-(*
-Naif Set Theory
-
+(*
+ Propositions as Types
*)
-include "basics/types.ma".
-include "basics/bool.ma".
-(*
-In this Chapter we shall develop a naif theory of sets represented as
-characteristic predicates over some universe A, that is as objects of type
-A→Prop.
-For instance the empty set is defined by the always false function: *)
-definition empty_set ≝ λA:Type[0].λa:A.False.
-notation "\emptyv" non associative with precedence 90 for @{'empty_set}.
-interpretation "empty set" 'empty_set = (empty_set ?).
+include "tutorial/chapter3.ma".
+include "basics/logic.ma".
-(* Similarly, a singleton set contaning containing an element a, is defined
-by by the characteristic function asserting equality with a *)
+(* In the previous section, we introduced many interesting logical connectives
+by means of inductive definitions in the sort Prop. Do the same constructions
+make any sense in Type? The answer is yes! Not only do they make sense, but they
+are indeed very familiar constructions: cartesian product, disjoint sum, empty
+and singleton types, and ``sigma types'' (disjoint union of families of types
+indexed over a given base type). This is not a coincidence, but a consequence of
+a strong principle known under the name of ``Propositions as Types analogy'' (or
+Curry-Howard correspondence).
-definition singleton ≝ λA.λx,a:A.x=a.
-(* notation "{x}" non associative with precedence 90 for @{'sing_lang $x}. *)
-interpretation "singleton" 'singl x = (singleton ? x).
+We shall first discuss the constructions, and then we shall come back to the
+general analogy.*)
-(* The membership relation between an element of type A and a set S:A →Prop is
-simply the predicate resulting from the application of S to a.
-The operations of union, intersection, complement and substraction
-are easily defined in terms of the propositional connectives of dijunction,
-conjunction and negation *)
+(***************************** Cartesian Product ******************************)
-definition union : ∀A:Type[0].∀P,Q.A → Prop ≝ λA,P,Q,a.P a ∨ Q a.
-interpretation "union" 'union a b = (union ? a b).
+(* The cartesian product of two types A and B is defined as follows: *)
-definition intersection : ∀A:Type[0].∀P,Q.A→Prop ≝ λA,P,Q,a.P a ∧ Q a.
-interpretation "intersection" 'intersects a b = (intersection ? a b).
+inductive Prod (A,B:Type[0]) : Type[0] ≝
+ pair : A → B → Prod A B.
+
+(* Observe that the definition is identical to the definition of the logical
+conjunction, but for the fact that the sort Prop has been replaced by Type[0].
-definition complement ≝ λU:Type[0].λA:U → Prop.λw.¬ A w.
-interpretation "complement" 'not a = (complement ? a).
+The following declarations allow us to use the canonical notations A × B for
+the product and 〈a,b〉 for the pair of the two elements a and b. *)
-definition substraction := λU:Type[0].λA,B:U → Prop.λw.A w ∧ ¬ B w.
-interpretation "substraction" 'minus a b = (substraction ? a b).
+notation "hvbox(x break × y)" with precedence 70 for @{ 'product $x $y}.
+interpretation "Cartesian product" 'product A B = (Prod A B).
-(* Finally, we use implication to define the inclusion relation between
-sets *)
+notation "hvbox(\langle term 19 a, break term 19 b\rangle)"
+ with precedence 90 for @{ 'pair $a $b}.
+interpretation "Pair construction" 'pair x y = (pair ?? x y).
-definition subset: ∀A:Type[0].∀P,Q:A→Prop.Prop ≝ λA,P,Q.∀a:A.(P a → Q a).
-interpretation "subset" 'subseteq a b = (subset ? a b).
+(* We can define the two projections fst and snd by a simple case analysis of
+the term: *)
-(*
-Set Equality
+definition fst ≝ λA,B.λp:A×B. match p with [pair a b ⇒ a].
+definition snd ≝ λA,B.λp:A×B. match p with [pair a b ⇒ b].
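+
+(* As a small example (a sketch added for illustration, not part of the original
+development and not machine-checked): using the two projections we can define a
+function swapping the components of a pair. *)
+
+definition swap ≝ λA,B.λp:A×B.〈snd ?? p, fst ?? p〉.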
-Two sets are equals if and only if they have the same elements, that is,
-if the two characteristic functions are extensionally equivalent: *)
+(* As in the case of inductive propositions, Matita automatically generates
+elimination principles for A × B. In this case, however, it becomes interesting
+to consider the possibility that the proposition towards which we are
+eliminating a given pair p:A × B contains a copy of p itself. In other words,
+if we have p:A × B in the current context, it is possible that p also occurs in
+the current goal, that is that the current goal "depends" on p.
-definition eqP ≝ λA:Type[0].λP,Q:A → Prop.∀a:A.P a ↔ Q a.
-notation "A =1 B" non associative with precedence 45 for @{'eqP $A $B}.
-interpretation "extensional equality" 'eqP a b = (eqP ? a b).
+A typical example is in the proof of the so called ``surjective pairing''
+property: *)
-(*
-This notion of equality is different from the intensional equality of
-functions; the fact it defines an equivalence relation must be explicitly
-proved: *)
+lemma surjective_pair: ∀A,B.∀p:A×B. p = 〈fst ?? p, snd ?? p〉.
-lemma eqP_sym: ∀U.∀A,B:U →Prop.
- A =1 B → B =1 A.
-#U #A #B #eqAB #a @iff_sym @eqAB qed.
-
-lemma eqP_trans: ∀U.∀A,B,C:U →Prop.
- A =1 B → B =1 C → A =1 C.
-#U #A #B #C #eqAB #eqBC #a @iff_trans // qed.
+(* where p explicitly occurs in the conclusion. The proof is straightforward: we
+introduce A, B and p and proceed by cases on p: since p is a pair, the only
+possible case is that it is of the form 〈a,b〉 for some a and b.*)
-(* For the same reason, we must also prove that all the operations behave well
-with respect to eqP: *)
+#A #B #p cases p #a #b
-lemma eqP_union_r: ∀U.∀A,B,C:U →Prop.
- A =1 C → A ∪ B =1 C ∪ B.
-#U #A #B #C #eqAB #a @iff_or_r @eqAB qed.
-
-lemma eqP_union_l: ∀U.∀A,B,C:U →Prop.
- B =1 C → A ∪ B =1 A ∪ C.
-#U #A #B #C #eqBC #a @iff_or_l @eqBC qed.
-
-lemma eqP_intersect_r: ∀U.∀A,B,C:U →Prop.
- A =1 C → A ∩ B =1 C ∩ B.
-#U #A #B #C #eqAB #a @iff_and_r @eqAB qed.
-
-lemma eqP_intersect_l: ∀U.∀A,B,C:U →Prop.
- B =1 C → A ∩ B =1 A ∩ C.
-#U #A #B #C #eqBC #a @iff_and_l @eqBC qed.
+(* At this point the goal looks like
+ 〈a,b〉 = 〈fst A B 〈a,b〉, snd A B 〈a,b〉〉
+and the two sides of the equation are convertible. *)
+// qed.
+
+(* When we call cases p we are actually applying the dependent elimination
+principle Prod_ind for the product, so it becomes interesting to have a look at
+it (we shall postpone the discussion of the way it is generated to a later
+section):
-lemma eqP_substract_r: ∀U.∀A,B,C:U →Prop.
- A =1 C → A - B =1 C - B.
-#U #A #B #C #eqAB #a @iff_and_r @eqAB qed.
+ ∀A,B.∀P:A×B →Prop.(∀a:A.∀b:B. P 〈a,b〉) → ∀x:A×B.P x
-lemma eqP_substract_l: ∀U.∀A,B,C:U →Prop.
- B =1 C → A - B =1 A - C.
-#U #A #B #C #eqBC #a @iff_and_l /2/ qed.
+The previous principle has a very natural backward reading: in order to prove
+that the property (P x) holds for any x of type A×B it is enough to prove P〈a,b〉
+for any a:A and b:B. *)
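+
+(* For instance (an illustrative sketch, not in the original script and not
+machine-checked), the computation rules of the projections hold by mere
+conversion: *)
+
+lemma fst_pair: ∀A,B.∀a:A.∀b:B. fst ?? 〈a,b〉 = a. #A #B #a #b // qed.
+lemma snd_pair: ∀A,B.∀a:A.∀b:B. snd ?? 〈a,b〉 = b. #A #B #a #b // qed.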
-(*
-Simple properties of sets
+(******************************* Disjoint Sum *********************************)
-We can now prove several properties of the previous set-theoretic operations.
-In particular, union is commutative and associative, and the empty set is an
-identity element: *)
+(* By reflecting in Type the definition of the logical disjunction we obtain the
+disjoint union (the sum) of two types: *)
-lemma union_empty_r: ∀U.∀A:U→Prop.
- A ∪ ∅ =1 A.
-#U #A #w % [* // normalize #abs @False_ind /2/ | /2/]
-qed.
+inductive Sum (A,B:Type[0]) : Type[0] ≝
+ inl : A → Sum A B
+| inr : B → Sum A B.
-lemma union_comm : ∀U.∀A,B:U →Prop.
- A ∪ B =1 B ∪ A.
-#U #A #B #a % * /2/ qed.
+notation "hvbox(a break + b)"
+ left associative with precedence 55 for @{ 'plus $a $b }.
-lemma union_assoc: ∀U.∀A,B,C:U → Prop.
- A ∪ B ∪ C =1 A ∪ (B ∪ C).
-#S #A #B #C #w % [* [* /3/ | /3/ ] | * [/3/ | * /3/]
-qed.
+interpretation "Disjoint union" 'plus A B = (Sum A B).
-(* In the same way we prove commutativity and associativity for set
-interesection *)
+(* The two constructors are the left and right injections. The dependent
+elimination principle Sum_ind has the following shape:
-lemma cap_comm : ∀U.∀A,B:U →Prop.
- A ∩ B =1 B ∩ A.
-#U #A #B #a % * /2/ qed.
+ ∀A,B.∀P:A+B →Prop.
+ (∀a:A.P (inl A B a)) → (∀b:B.P (inr A B b)) → ∀x:A+B.P x
+
+that is, in order to prove the property (P x) for any x:A+B it is enough to
+prove P (inl A B a) and P (inr A B b) for all a:A and b:B. *)
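+
+(* As an example of programming by case analysis on a sum (a sketch added here
+for illustration, untested), we can exchange the two injections: *)
+
+definition sum_swap ≝ λA,B.λx:A+B.
+ match x with [inl a ⇒ inr B A a | inr b ⇒ inl B A b].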
+
+(************************* Empty Type and Unit Type ***************************)
-lemma cap_assoc: ∀U.∀A,B,C:U→Prop.
- A ∩ (B ∩ C) =1 (A ∩ B) ∩ C.
-#U #A #B #C #w % [ * #Aw * /3/ | * * /3/ ]
-qed.
+(* The counterparts of False and True are, respectively, an empty type and a
+singleton type: *)
-(* We can also easily prove idempotency for union and intersection *)
+inductive void : Type[0] ≝.
+inductive unit : Type[0] ≝ it: unit.
-lemma union_idemp: ∀U.∀A:U →Prop.
- A ∪ A =1 A.
-#U #A #a % [* // | /2/] qed.
+(* The elimination principle void_ind for void is simply
-lemma cap_idemp: ∀U.∀A:U →Prop.
- A ∩ A =1 A.
-#U #A #a % [* // | /2/] qed.
+ ∀P:void→Prop.∀x:void.P x
+
+stating that any property is true for any element of type void (since we have no
+such element).
-(* We conclude our examples with a couple of distributivity theorems, and a
-characterization of substraction in terms of interesection and complementation. *)
+Similarly, the elimination principle for the unit type is:
-lemma distribute_intersect : ∀U.∀A,B,C:U→Prop.
- (A ∪ B) ∩ C =1 (A ∩ C) ∪ (B ∩ C).
-#U #A #B #C #w % [* * /3/ | * * /3/]
-qed.
+ ∀P:unit→Prop. P it → ∀x:unit.P x
-lemma distribute_substract : ∀U.∀A,B,C:U→Prop.
- (A ∪ B) - C =1 (A - C) ∪ (B - C).
-#U #A #B #C #w % [* * /3/ | * * /3/]
-qed.
+Indeed, in order to prove that a property is true for any element of the unit
+type, it is enough to prove it for the unique (canonical) inhabitant "it".
-lemma substract_def:∀U.∀A,B:U→Prop. A-B =1 A ∩ ¬B.
-#U #A #B #w normalize /2/
-qed.
+As an exercise, let us prove that all inhabitants of unit are equal to each
+other: *)
-(*
-Bool vs. Prop
+lemma eq_unit: ∀a,b:unit. a = b.
-In several situation it is important to assume to have a decidable equality
-between elements of a set U, namely a boolean function eqb: U→U→bool such that
-for any pair of elements a and b in U, (eqb x y) is true if and only if x=y.
-A set equipped with such an equality is called a DeqSet: *)
+(* The idea is to proceed by cases on a and b: we have only one possibility,
+namely a=it and b=it, hence we end up proving it=it, which is trivial. Here is
+the proof: *)
-record DeqSet : Type[1] ≝ { carr :> Type[0];
- eqb: carr → carr → bool;
- eqb_true: ∀x,y. (eqb x y = true) ↔ (x = y)
-}.
+#a cases a #b cases b // qed. (* also: * * // qed. *)
-(* We use the notation == to denote the decidable equality, to distinguish it
-from the propositional equality. In particular, a term of the form a==b is a
-boolean, while a=b is a proposition. *)
+(* It is interesting to observe that we get exactly the same behavior by
+directly applying unit_ind instead of proceeding by cases. In fact, this is an
+alternative proof: *)
-notation "a == b" non associative with precedence 45 for @{ 'eqb $a $b }.
-interpretation "eqb" 'eqb a b = (eqb ? a b).
+lemma eq_unit_again: ∀a,b:unit. a = b.
+@unit_ind @unit_ind // qed.
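+
+(* Another way to look at unit (an illustrative sketch, not machine-checked):
+it is "terminal", in the sense that from any type there is a trivial, constant
+function into it: *)
+
+definition to_unit : ∀A:Type[0]. A → unit ≝ λA,a. it.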
-(*
-Small Scale Reflection
+(********************* Sigma Types and dependent matching *********************)
-It is convenient to have a simple way to reflect a proof of the fact
-that (eqb a b) is true into a proof of the proposition (a = b); to this aim,
-we introduce two operators "\P" and "\b". *)
+(* As a final example, let us consider ``type'' variants of the existential
+quantifier; in this case we have two interesting possibilities: *)
-notation "\P H" non associative with precedence 90
- for @{(proj1 … (eqb_true ???) $H)}.
+inductive Sig (A:Type[0]) (Q:A → Prop) : Type[0] ≝
+ Sig_intro: ∀x:A. Q x → Sig A Q.
-notation "\b H" non associative with precedence 90
- for @{(proj2 … (eqb_true ???) $H)}.
-
-(* If H:eqb a b = true, then \P H: a = b, and conversely if h:a = b, then
-\b h: eqb a b = true. Let us see an example of their use: the following
-statement asserts that we can reflect a proof that eqb a b is false into
-a proof of the proposition a ≠ b. *)
+inductive DPair (A:Type[0]) (Q:A → Type[0]) : Type[0] ≝
+ DPair_intro: ∀x:A. Q x → DPair A Q.
+
+(* In the first case (traditionally called sigma type), an element of type
+(Sig A P) is a pair (Sig_intro ?? a h) where a is an element of type A and h is a
+proof that the property (P a) holds.
+
+A suggestive reading of (Sig A P) is as the subset of A enjoying the property P,
+that is {a:A | P(a)}.
+
+In the second case, an element of DPair A B is a "dependent" pair
+(DPair_intro ?? a h) where a is an element of type A and h is an element of
+(B a); the intuition is to look at DPair A B as the (disjoint) union of the
+family of types (B a) indexed over elements a:A.
+
+In both cases it is possible to define projections extracting the two components
+of the pair. Let us discuss the case of the sigma type (the other one is
+analogous).
+
+Extracting the first component (the element) is easy: *)
+
+definition Sig_fst ≝ λA:Type[0].λP:A→Prop.λx:Sig A P.
+ match x with [Sig_intro a h ⇒ a].
-lemma eqb_false: ∀S:DeqSet.∀a,b:S.
- (eqb ? a b) = false ↔ a ≠ b.
+(* Getting the second component is a bit trickier. The first problem is
+the type of the result: given a pair (Sig_intro ?? a h): Sig A P, the type of
+h is (P a), that is P applied to the first argument of the pair of which we want
+to extract the second component!
+Luckily, in a language with dependent types, it is not a problem to write such
+a type:
-(* We start the proof introducing the hypothesis, and then split the "if" and
-"only if" cases *)
+ definition Sig_snd : ∀A,P.∀x:Sig A P.P (Sig_fst A P x) ≝ ...
+
+A subtler problem is met when we define the body. If we just write
+
+ definition Sig_snd : ∀A,P.∀x:Sig A P.P (Sig_fst A P x) ≝ λA,P,x.
+ match x with [Sig_intro a h ⇒ h].
+
+the system will complain about the type of h. The point is that the type of this
+term depends on a, that is, in a not so trivial way, on the input argument x.
+In these situations, the type inference algorithm of Matita requires a little
+help: in particular the user is asked to explicitly provide the return type of
+the match expression, that is a map uniformly expressing the type of all
+branches of the case as a function of the matched expression.
+In our case we only have one branch and the return type is
+
+ λx.P (Sig_fst A P x)
+
+We declare such a type immediately after the match, introduced by the keyword
+``return'': *)
+
+definition Sig_snd : ∀A,P.∀x:Sig A P.P (Sig_fst A P x) ≝ λA,P,x.
+ match x return λx.P (Sig_fst A P x) with [Sig_intro a h ⇒ h].
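+
+(* A small usage sketch (added for illustration, not machine-checked; the names
+sig_it and sig_it_fst are ours): we pack the canonical inhabitant of unit with a
+proof of a trivial property, and then recover that proof with Sig_snd: *)
+
+definition sig_it : Sig unit (λx. x = it) ≝ Sig_intro ?? it (refl ? it).
+
+lemma sig_it_fst : Sig_fst ?? sig_it = it.
+@(Sig_snd ?? sig_it) qed.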
+
+(* Remark: The careful reader may have noticed that we systematically abused
+the arrow symbol: the expression A → B was sometimes interpreted as the
+"implication" between A and B and sometimes as the "function space" between A
+and B. The reason is that, in a foundational system like the Calculus of
+Constructions, they are the very same notion: we only distinguish them
+according to the sort of the resulting expression.
+Similarly for the dependent product ∏x:A.B: if the resulting expression is of
+sort Prop we shall look at it as a "universal quantification" (using the
+notation ∀x:A.B), while if it is in Type we shall typically regard it as a
+generalized cartesian product of a family of types B indexed over a base type A.
+*)
+
+(************************ Kolmogorov interpretation ***************************)
+
+(* The previous analogy between propositions and types is a consequence of a
+deep philosophical interpretation of the notion of proof in terms of a
+constructive procedure that transforms proofs of the premises into a proof of
+the conclusion. This is usually referred to as the Kolmogorov interpretation, or
+Brouwer–Heyting–Kolmogorov (BHK) interpretation.
+
+The interpretation states what is intended to be a proof of a given formula φ,
+and is specified by induction on the structure of φ:
+
+- a proof of P∧Q is a pair 〈a,b〉 where a is a proof of P and b is a proof of Q;
+- a proof of P∨Q is a pair 〈a,b〉 where either a is 0 and b is a proof of P, or
+ a=1 and b is a proof of Q;
+- a proof of P→Q is a function f which transforms a proof x:P into a proof
+ (f x):Q;
+- a proof of ∃x:S.φ(x) is a pair 〈a,b〉 where a is an element of S, and b is a
+ proof of φ(a);
+- a proof of ∀x:S.φ(x) is a function f which transforms an element a of S into a
+ proof of φ(a);
+- the formula ¬P is defined as P → False, so a proof of it is a function f which
+ transforms a proof of P into a proof of False;
+- there is no proof of False.
+
+For instance, a proof of the formula P → P is a function transforming a proof of
+P into a proof of P: but we can just take the identity!
+
+You can explicitly exploit this idea for writing proofs in Matita. Instead of
+declaring a lemma and proving it interactively, you may define your lemma as if
+it was the type of an expression, and directly proceed to inhabit it with its
+proof: *)
+
+definition trivial: ∀P:Prop.P→P ≝ λP,h.h.
+
+(* It is interesting to observe that this is really the very same proof
+(intensionally!!) that would be produced interactively, as testified by the
+following fragment of code: *)
+
+lemma trivial1: ∀P:Prop.P→P. #P #h @h qed.
+lemma same_proofs: trivial = trivial1. @refl qed.
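+
+(* Another example in the same spirit (a sketch added here, not part of the
+original text): under the BHK reading, the transitivity of implication is just
+function composition. *)
+
+definition compose: ∀P,Q,R:Prop. (P→Q) → (Q→R) → P → R ≝ λP,Q,R,f,g,x. g (f x).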
+
+(* Even more interestingly, we can do the opposite, namely define functions
+interactively.
+Suppose for instance that we need to define a function with the following
+type: ∀A,B,C:Type[0]. (A → B → C) → A×B → C.
+If we just declare the type of the function followed by a full stop, Matita will
+start an interactive session completely analogous to a proof session, and we can
+use the usual tactics to ``close the goal'', that is to inhabit the type. *)
+
+definition uncurrying: ∀A,B,C:Type[0]. (A→B→C)→A×B→C.
+#A #B #C #f * @f qed.
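+
+(* The inverse transformation can of course be given directly as a term (an
+illustrative sketch, untested): *)
+
+definition currying: ∀A,B,C:Type[0]. (A×B→C) → A → B → C ≝
+ λA,B,C,f,a,b. f 〈a,b〉.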
+
+(********************** The Curry-Howard correspondence ***********************)
+
+(* The philosophical ideas contained in the BHK interpretation of a proof as a
+constructive procedure building a proof of the conclusion from proofs of the
+hypotheses get a precise syntactical systematization via the so called
+Curry-Howard correspondence, expressing a direct relationship between computer
+programs and proofs.
+The Curry-Howard correspondence, also known as proofs-as-programs analogy, is a
+generalization of a syntactic analogy between systems of formal logic and
+computational calculi first observed by Curry for Combinatory Logic and Hilbert-
+style deduction systems, and later by Howard for λ-calculus and Natural
+Deduction: in both cases the formation rules for well typed terms have precisely
+the same shape as the introduction rules of the corresponding logical
+connective.
-#S #a #b % #H
+As a consequence, the expression
-(* The latter is easily reduced to prove the goal true=false under the assumption
-H1: a = b *)
- [@(not_to_not … not_eq_true_false) #H1
-
-(* since by assumption H false is equal to (a==b), by rewriting we obtain the goal
-true=(a==b) that is just the boolean version of H1 *)
+ M:A
+
+really has a double possible reading:
+
+- M is a term of type A
+- M is a proof of the proposition A
- <H @sym_eq @(\b H1)
+In both cases, M is a witness that the object A is "inhabited". A free variable
+x:A is an assumption about the validity of A (we assume to have an unknown proof
+named x). Let us consider the cases of the introduction and elimination rules of
+the implication in natural deduction, which are particularly interesting:
-(* In the "if" case, we proceed by cases over the boolean equality (a==b); if
-(a==b) is false, the goal is trivial; the other case is absurd, since if (a==b) is
-true, then by reflection a=b, while by hypothesis a≠b *)
+ Γ,A ⊢ B Γ ⊢ A → B Γ ⊢ A
+ ---------- -------------------
+ Γ ⊢ A → B Γ ⊢ B
- |cases (true_or_false (eqb ? a b)) // #H1 @False_ind @(absurd … (\P H1) H)
- ]
-qed.
-
-(* We also introduce two operators "\Pf" and "\bf" to reflect a proof
-of (a==b)=false into a proof of a≠b, and vice-versa *)
-
-notation "\Pf H" non associative with precedence 90
- for @{(proj1 … (eqb_false ???) $H)}.
-
-notation "\bf H" non associative with precedence 90
- for @{(proj2 … (eqb_false ???) $H)}.
-
-(* The following statement proves that propositional equality in a
-DeqSet is decidable in the traditional sense, namely either a=b or a≠b *)
-
- lemma dec_eq: ∀S:DeqSet.∀a,b:S. a = b ∨ a ≠ b.
-#S #a #b cases (true_or_false (eqb ? a b)) #H
- [%1 @(\P H) | %2 @(\Pf H)]
-qed.
-
-(*
-Unification Hints
-
-A simple example of a set with a decidable equality is bool. We first define
-the boolean equality beqb, that is just the xand function, then prove that
-beqb b1 b2 is true if and only if b1=b2, and finally build the type DeqBool by
-instantiating the DeqSet record with the previous information *)
-
-definition beqb ≝ λb1,b2.
- match b1 with [ true ⇒ b2 | false ⇒ notb b2].
-
-notation < "a == b" non associative with precedence 45 for @{beqb $a $b }.
-
-lemma beqb_true: ∀b1,b2. iff (beqb b1 b2 = true) (b1 = b2).
-#b1 #b2 cases b1 cases b2 normalize /2/
-qed.
-
-definition DeqBool ≝ mk_DeqSet bool beqb beqb_true.
-
-(* At this point, we would expect to be able to prove things like the
-following: for any boolean b, if b==false is true then b=false.
-Unfortunately, this would not work, unless we declare b of type
-DeqBool (change the type in the following statement and see what
-happens). *)
-
-example exhint: ∀b:DeqBool. (b==false) = true → b=false.
-#b #H @(\P H)
-qed.
-
-(* The point is that == expects in input a pair of objects whose type must be the
-carrier of a DeqSet; bool is indeed the carrier of DeqBool, but the type inference
-system has no knowledge of it (it is an information that has been supplied by the
-user, and stored somewhere in the library). More explicitly, the type inference
-inference system, would face an unification problem consisting to unify bool
-against the carrier of something (a metavaribale) and it has no way to synthetize
-the answer. To solve this kind of situations, matita provides a mechanism to hint
-the system the expected solution. A unification hint is a kind of rule, whose rhd
-is the unification problem, containing some metavariables X1, ..., Xn, and whose
-left hand side is the solution suggested to the system, in the form of equations
-Xi=Mi. The hint is accepted by the system if and only the solution is correct, that
-is, if it is a unifier for the given problem.
-To make an example, in the previous case, the unification problem is bool = carr X,
-and the hint is to take X= mk_DeqSet bool beqb true. The hint is correct, since
-bool is convertible with (carr (mk_DeqSet bool beb true)). *)
-
-alias symbol "hint_decl" (instance 1) = "hint_decl_Type1".
-
-unification hint 0 ≔ ;
- X ≟ mk_DeqSet bool beqb beqb_true
-(* ---------------------------------------- *) ⊢
- bool ≡ carr X.
+The first step is to enrich the representation by decorating formulae with
+explicit proof terms. In particular, formulae in the context, being assumptions,
+will be decorated with free variables (different from each other), while the
+conclusion will be decorated with a term expression with free variables
+appearing in the context.
+
+Suppose Γ,x:A ⊢ M:B. What is the expected decoration for A → B?
+According to the Kolmogorov interpretation, M is a procedure transforming the
+proof x:A into a proof of B; the proof of A → B is hence the function that,
+taking x as input, returns M, and our canonical notation for such a function
+is λx:A.M. Hence we get:
+
+ Γ,x:A ⊢ M:B
+ -----------------
+ Γ ⊢ λx:A.M: A → B
-unification hint 0 ≔ b1,b2:bool;
- X ≟ mk_DeqSet bool beqb beqb_true
-(* ---------------------------------------- *) ⊢
- beqb b1 b2 ≡ eqb X b1 b2.
+that is precisely the typing rule for functions.
+
+Similarly, let us suppose that Γ ⊢ M:A → B, and Γ ⊢ N:A, and let us derive a
+natural proof decoration for the arrow elimination rule (that is just the well
+known modus ponens rule). Again, we get inspiration from the Kolmogorov
+interpretation: a proof M:A → B is a function transforming a proof of A into a
+proof of B; hence, since we have a proof N:A, in order to get a proof of B it is
+enough to apply M to the argument N:
+
+ Γ ⊢ M: A → B Γ ⊢ N: A
+ --------------------------
+ Γ ⊢ (M N):B
+
+But this is nothing else than the typing rule for the application!
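+
+(* For instance (an illustrative sketch, not in the original text), the axioms
+K and S of Combinatory Logic, read as propositions, are inhabited by the
+homonymous combinators: *)
+
+definition K: ∀A,B:Prop. A → B → A ≝ λA,B,a,b. a.
+definition S: ∀A,B,C:Prop. (A→B→C) → (A→B) → A → C ≝ λA,B,C,f,g,a. f a (g a).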
+
+There is still a point that deserves discussion: the most interesting feature of
+programs is that they can be executed (in a functional setting, terms can be
+reduced to a normal form). By the Curry-Howard correspondence, this should
+correspond to a normalization procedure over proofs: does this operation make
+any sense at the logical level? Again the answer is yes: not only does it make
+sense, but it was independently investigated in the field of proof theory. A
+reducible expression corresponds to what is traditionally called a "cut". A cut
+is a logical ``detour'' typically arising from an introduction rule immediately
+followed by an elimination rule for the same connective, as in a beta-redex, where
+we have a direct "interaction" between an application and a λ-abstraction:
+
+ (beta-rule) (λx.M) N → M[N/x]
-(* After having provided the previous hints, we may rewrite example exhint
-declaring b of type bool. *)
+One of the main meta-theoretical results usually investigated for proof
+systems is whether they enjoy the so called "cut-elimination" principle, that
+is, whether the cut-rule is "admissible": any proof that makes use of cuts can
+be turned into a cut-free proof. Since a cut is a redex, a cut-free proof is a
+term that does not contain any redex, that is a term in normal form.
+Hence, the system enjoys cut-elimination if and only if the corresponding
+calculus is normalizing.
+
+Cut-elimination is a major tool of proof theory, with important implications for
+e.g. "consistency", "automation" and "interpolation".
+
+Let us make a final remark. If a program is a proof, what does the act of
+verifying the correctness of a proof M of some proposition A correspond to?
+Well, M is just an expression that is supposed to have type A, so
+"proof verification" is nothing else than "type checking"!
+
+This should also clarify that proof verification is a much easier task than
+proof search. While the former corresponds to type checking, the latter
+corresponds to the automatic synthesis of a program from a specification!
+
+The main ideas behind the Curry-Howard correspondence are summarized in the
+following picture:
+
+ Curry-Howard Correspondence
+
+ Logic | Programming
+ |
+ proposition | type
+ proof | program
+ cut | reducible expression (redex)
+ cut free | normal form
+ cut elimination | normalization
+ correctness | type checking
+*)
+
+(******************************* Prop vs. Type ********************************)
+
+(* In view of the Curry-Howard analogy, the reader could wonder if there is any
+actual difference between the two sorts Prop and Type in a system like the
+Calculus of Constructions, or if it is just a matter of flavour.
+
+In fact there is a subtle difference concerning the type of product types over
+the given sorts. Consider for instance a higher order sentence of
+the kind ∀P:Prop.P: this is just a sentence asserting that any proposition is
+true, and it looks natural to look at it as an object of sort Prop. However, if
+this is the case, when we are quantifying over ``all propositions'' we are also
+implicitly quantifying over the proposition we are in the act of defining, which
+creates a strange and possibly dangerous circularity.
+
+In mathematics, definitions of this kind, where the definition (directly or
+indirectly) mentions or quantifies over the entity being defined are called
+"impredicative".
+
+The opposite notion of impredicativity is "predicativity", which essentially
+entails building stratified (or ramified) theories where quantification over
+lower levels results in objects of some new type.
+
+Impredicativity can be dangerous: a well known example is Russell's ``set of all
+sets'', resulting in a famous logical paradox: does such a set contain itself as an
+element? If it does then by definition it should not, and if it does not then by
+definition it should.
+
+A predicative approach would consist in distinguishing e.g. between ``small''
+sets and ``large'' sets (collections), where the set of all small sets would
+result in a large set.
+
+In fact, if we look at ∀P:Type[0].P as a dependent product, and we identify
+Type[0] as a universe of (small) ``sets'', it would seem strange to conclude
+that quantifying over all (small) sets we could obtain another (small) set.
+
+In Matita, ∀P:Type[0].P has in fact type Type[1] and not Type[0], where Type[1]
+is also the type of Type[0].
+
+So, while Prop is impredicative, sorts of the kind Type[i] define a potentially
+infinite hierarchy of predicative sorts.
+
+The difference between predicative and impredicative sorts is in the typing rule
+for the product ∏x:A.B: if the sort of B is Prop then ∏x:A.B has type Prop
+whatever the sort of A is; if A:Type[i] and B:Type[j], then ∏x:A.B:Type[k] where
+Type[k] is the smallest sort strictly larger than Type[i] and Type[j]:
+
+ Γ ⊢ A : s Γ,x:A ⊢ B:Prop Γ ⊢ A : Type[i] Γ,x:A ⊢ B:Type[j]
+ --------------------------- ------------------------------------
+ Γ ⊢ ∏x:A.B : Prop Γ ⊢ ∏x:A.B : Type[k]
+
+for k = max{i,j}+1.
+
+It is worth observing that in Matita, the index i is just a label: constraints
+among universes are declared by the user. The standard library (see
+"basics/pts.ma") declares a few of them, with the following relations:
+
+ Type[0] < Type[1] < Type[2] < …
+
+The impredicativity of Prop is not a problem from the point of view of logical
+consistency, but there is a price to be paid for this: we are not allowed to
+eliminate Prop towards Type[i]. In concrete terms this means that while we are
+allowed to build terms, types or even propositions by structural induction on
+terms, the only thing we can do by structural induction/case analysis on proofs
+is to build other proofs.
+
+For instance, we know that a proof p:A∨B is either a proof of A or a proof of B,
+and one could be tempted to define a function that returns the boolean true in
+the first case and false otherwise, by performing a simple case analysis on p:
+
+definition discriminate_to_bool ≝ λA,B:Prop.λp:A∨B.
+ match p with
+ [ or_introl a ⇒ true
+ | or_intror b ⇒ false
+ ].
+
+If you type the previous definition, Matita will complain with the following
+error message: ``TypeChecker failure: Sort elimination not allowed: Prop towards
+Type[0]''.
+Even returning the two propositions True and False instead of two booleans would
+not work:
+
+definition discriminate_to_Prop ≝ λA,B:Prop.λp:A∨B.
+ match p with
+ [ or_introl a ⇒ True
+ | or_intror b ⇒ False
+ ].
-example exhint1: ∀b:bool. (b == false) = true → b = false.
-#b #H @(\P H)
-qed.
-
-(* The cartesian product of two DeqSets is still a DeqSet. To prove
-this, we must as usual define the boolen equality function, and prove
-it correctly reflects propositional equality. *)
-
-definition eq_pairs ≝
- λA,B:DeqSet.λp1,p2:A×B.(\fst p1 == \fst p2) ∧ (\snd p1 == \snd p2).
-
-lemma eq_pairs_true: ∀A,B:DeqSet.∀p1,p2:A×B.
- eq_pairs A B p1 p2 = true ↔ p1 = p2.
-#A #B * #a1 #b1 * #a2 #b2 %
- [#H cases (andb_true …H) normalize #eqa #eqb >(\P eqa) >(\P eqb) //
- |#H destruct normalize >(\b (refl … a2)) >(\b (refl … b2)) //
- ]
-qed.
-
-definition DeqProd ≝ λA,B:DeqSet.
- mk_DeqSet (A×B) (eq_pairs A B) (eq_pairs_true A B).
-
-(* Having an unification problem of the kind T1×T2 = carr X, what kind
-of hint can we give to the system? We expect T1 to be the carrier of a
-DeqSet C1, T2 to be the carrier of a DeqSet C2, and X to be DeqProd C1 C2.
-This is expressed by the following hint: *)
-
-unification hint 0 ≔ C1,C2;
- T1 ≟ carr C1,
- T2 ≟ carr C2,
- X ≟ DeqProd C1 C2
-(* ---------------------------------------- *) ⊢
- T1×T2 ≡ carr X.
-
-unification hint 0 ≔ T1,T2,p1,p2;
- X ≟ DeqProd T1 T2
-(* ---------------------------------------- *) ⊢
- eq_pairs T1 T2 p1 p2 ≡ eqb X p1 p2.
-
-example hint2: ∀b1,b2.
- 〈b1,true〉==〈false,b2〉=true → 〈b1,true〉=〈false,b2〉.
-#b1 #b2 #H @(\P H).
-qed-.
\ No newline at end of file
+The error message is the same as before: in both cases the sort of the branches
+is Type[0]. The only thing we can do is to return another proof, as in the
+following example: *)
+
+definition or_elim ≝ λA,B,C:Prop.λp:A∨B.λf:A→C.λg:B→C.
+ match p with
+ [ or_introl a ⇒ f a
+ | or_intror b ⇒ g b
+ ].
+
+(* Exercise: repeat the previous examples in interactive mode, eliminating the
+hypothesis p:A∨B. *)
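The computational content of or_elim can be sketched outside Matita as well. The following Python rendering is purely illustrative (Python stands in for the functional fragment; the tag names are assumptions, not part of the tutorial's development): under the Curry-Howard reading, a proof of A∨B is a value of a disjoint-sum type, and or_elim is just case analysis on it.

```python
# Illustrative encoding of the disjoint sum: a tagged pair.
def inl(a): return ("inl", a)   # plays the role of or_introl
def inr(b): return ("inr", b)   # plays the role of or_intror

def or_elim(p, f, g):
    # case analysis on the "proof" p, exactly as in the Matita match
    tag, v = p
    return f(v) if tag == "inl" else g(v)
```

For example, `or_elim(inl(2), str, lambda b: "no")` evaluates `str` on the injected value `2`.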
+
+
-(*
-Effective searching
-
-The fact of being able to decide, via a computable boolean function, the
-equality between elements of a given set is an essential prerequisite for
-effectively searching an element of that set inside a data structure. In this
-section we shall define several boolean functions acting on lists of elements in
-a DeqSet, and prove some of their properties.*)
-
-include "basics/lists/list.ma".
-include "tutorial/chapter4.ma".
-
-(* The first function we define is an effective version of the membership relation,
-between an element x and a list l. Its definition is a straightforward recursion on
-l.*)
-
-let rec memb (S:DeqSet) (x:S) (l: list S) on l ≝
- match l with
- [ nil ⇒ false
- | cons a tl ⇒ (x == a) ∨ memb S x tl
- ].
-
-notation < "\memb x l" non associative with precedence 90 for @{'memb $x $l}.
-interpretation "boolean membership" 'memb a l = (memb ? a l).
-
-(* We can now prove several interesing properties for memb:
-- memb_hd: x is a member of x::l
-- memb_cons: if x is a member of l than x is a member of a::l
-- memb_single: if x is a member of [a] then x=a
-- memb_append: if x is a member of l1@l2 then either x is a member of l1
- or x is a member of l2
-- memb_append_l1: if x is a member of l1 then x is a member of l1@l2
-- memb_append_l2: if x is a member of l2 then x is a member of l1@l2
-- memb_exists: if x is a member of l, than l can decomposed as l1@(x::l2)
-- not_memb_to_not_eq: if x is not a member of l and y is, then x≠y
-- memb_map: if a is a member of l, then (f a) is a member of (map f l)
-- memb_compose: if a is a member of l1 and b is a meber of l2 than
- (op a b) is a member of (compose op l1 l2)
+(*
+ More Data Types
*)
-lemma memb_hd: ∀S,a,l. memb S a (a::l) = true.
-#S #a #l normalize >(proj2 … (eqb_true S …) (refl S a)) //
-qed.
+include "basics/logic.ma".
-lemma memb_cons: ∀S,a,b,l.
- memb S a l = true → memb S a (b::l) = true.
-#S #a #b #l normalize cases (a==b) normalize //
-qed.
+(******************************** Option Type *********************************)
-lemma memb_single: ∀S,a,x. memb S a (x::[]) = true → a = x.
-#S #a #x normalize cases (true_or_false … (a==x)) #H
- [#_ >(\P H) // |>H normalize #abs @False_ind /2/]
-qed.
+(* Matita only allows the definition of total functions. However, we cannot
+always return a meaningful value: for instance, working on natural numbers, the
+predecessor of 0 is undefined. In these situations, we may either return a
+default value (usually, pred 0 = 0), or return an "option type" as a result. An
+option type is a polymorphic type representing the encapsulation of an optional
+value. It consists of either an empty constructor (traditionally called None),
+or a constructor encapsulating the original data type A (written Some A): *)
-lemma memb_append: ∀S,a,l1,l2.
-memb S a (l1@l2) = true → memb S a l1= true ∨ memb S a l2 = true.
-#S #a #l1 elim l1 normalize [#l2 #H %2 //]
-#b #tl #Hind #l2 cases (a==b) normalize /2/
-qed.
-
-lemma memb_append_l1: ∀S,a,l1,l2.
- memb S a l1= true → memb S a (l1@l2) = true.
-#S #a #l1 elim l1 normalize
- [normalize #le #abs @False_ind /2/
- |#b #tl #Hind #l2 cases (a==b) normalize /2/
- ]
-qed.
-
-lemma memb_append_l2: ∀S,a,l1,l2.
- memb S a l2= true → memb S a (l1@l2) = true.
-#S #a #l1 elim l1 normalize //
-#b #tl #Hind #l2 cases (a==b) normalize /2/
-qed.
-
-lemma memb_exists: ∀S,a,l.memb S a l = true → ∃l1,l2.l=l1@(a::l2).
-#S #a #l elim l [normalize #abs @False_ind /2/]
-#b #tl #Hind #H cases (orb_true_l … H)
- [#eqba @(ex_intro … (nil S)) @(ex_intro … tl) >(\P eqba) //
- |#mem_tl cases (Hind mem_tl) #l1 * #l2 #eqtl
- @(ex_intro … (b::l1)) @(ex_intro … l2) >eqtl //
- ]
-qed.
-
-lemma not_memb_to_not_eq: ∀S,a,b,l.
- memb S a l = false → memb S b l = true → a==b = false.
-#S #a #b #l cases (true_or_false (a==b)) //
-#eqab >(\P eqab) #H >H #abs @False_ind /2/
-qed.
+inductive option (A:Type[0]) : Type[0] ≝
+ None : option A
+ | Some : A → option A.
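As a quick illustration of the intended use, here is the total predecessor function sketched in Python (an illustrative rendering, not part of the Matita development; Python's `None` plays the role of the `None` constructor, and a bare value stands in for `Some`):

```python
def pred(n):
    # total predecessor on naturals: the undefined case (n = 0)
    # is signalled by None instead of an arbitrary default value
    return None if n == 0 else n - 1
```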
-lemma memb_map: ∀S1,S2,f,a,l. memb S1 a l= true →
- memb S2 (f a) (map … f l) = true.
-#S1 #S2 #f #a #l elim l normalize [//]
-#x #tl #memba cases (true_or_false (a==x))
- [#eqx >eqx >(\P eqx) >(\b (refl … (f x))) normalize //
- |#eqx >eqx cases (f a==f x) normalize /2/
- ]
-qed.
-
-lemma memb_compose: ∀S1,S2,S3,op,a1,a2,l1,l2.
- memb S1 a1 l1 = true → memb S2 a2 l2 = true →
- memb S3 (op a1 a2) (compose S1 S2 S3 op l1 l2) = true.
-#S1 #S2 #S3 #op #a1 #a2 #l1 elim l1 [normalize //]
-#x #tl #Hind #l2 #memba1 #memba2 cases (orb_true_l … memba1)
- [#eqa1 >(\P eqa1) @memb_append_l1 @memb_map //
- |#membtl @memb_append_l2 @Hind //
- ]
-qed.
-
-(*
-Unicity
-
-If we are interested in representing finite sets as lists, is is convenient
-to avoid duplications of elements. The following uniqueb check this property.
-*)
-
-let rec uniqueb (S:DeqSet) l on l : bool ≝
- match l with
- [ nil ⇒ true
- | cons a tl ⇒ notb (memb S a tl) ∧ uniqueb S tl
- ].
-
-(* unique_append l1 l2 add l1 in fornt of l2, but preserving unicity *)
-
-let rec unique_append (S:DeqSet) (l1,l2: list S) on l1 ≝
+(* The type option A is isomorphic to the disjoint sum between A and unit. The
+two bijections are simply defined as
+follows:
+\begin{lstlisting}[language=grafite]
+definition option_to_sum ≝ λA.λx:option A.
+ match x with
+ [ None ⇒ inl ?? it
+ | Some a ⇒ inr ?? a ].
+\end{lstlisting}
+
+\begin{lstlisting}[language=grafite]
+definition sum_to_option ≝ λA.λx:unit+A.
+ match x with
+ [ inl a ⇒ None A
+ | inr a ⇒ Some A a ].
+\end{lstlisting}
+The fact that going from the option type to the sum and back again we get the
+original element follows from a straightforward case analysis:
+\begin{lstlisting}[language=grafite]
+lemma option_bij1: ∀A,x. sum_to_option A (option_to_sum A x) = x.
+#A * // qed.
+\end{lstlisting}
+The other direction is just slightly more complex, since we need to exploit
+the fact that the unit type contains a single element: we could use lemma ... or
+proceed by cases on the unit element.
+\begin{lstlisting}[language=grafite]
+lemma option_bij2: ∀A,x. option_to_sum A (sum_to_option A x) = x.
+#A * // * // qed.
+\end{lstlisting}
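The two bijections and their round-trip behaviour can also be checked concretely. The following Python sketch is illustrative only (tags and names are assumptions; Python's `None` stands in for the `None` constructor):

```python
def option_to_sum(x):
    # None maps to the left injection of the unit element ();
    # Some a maps to the right injection of a
    return ("inl", ()) if x is None else ("inr", x)

def sum_to_option(x):
    tag, v = x
    return None if tag == "inl" else v
```

A case analysis on a few values confirms that the two functions are mutually inverse.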
+We shall see more examples of functions involving the option type in the
+next section.
+
+\subsection{Lists}
+The option type, as well as the cartesian product and the sum, are simple
+examples of polymorphic types, that is, types that depend in a uniform
+and effective way on other types. This form of polymorphism (sometimes
+called {\em parametric} polymorphism to distinguish it from the
+{\em ad hoc} polymorphism
+typical of object oriented languages) is a major feature of modern
+functional programming languages: in many interesting cases,
+it allows us to write {\em generic} functions operating on inputs without
+depending on their type. Typical examples can be found on lists.
+For instance, appending two lists is an operation that is essentially
+independent from the type $A$ of the elements in the list, and we would like
+to write a {\em single} append function working parametrically on $A$.
+
+The first step is to define a type for lists polymorphic over the
+type $A$ of its elements:
+\begin{lstlisting}[language=grafite]
+inductive list (A:Type[0]) : Type[0] :=
+ | nil: list A
+ | cons: A -> list A -> list A.
+\end{lstlisting}
+The definition should be clear: a list is either empty (\verb+nil+) or
+it is obtained by concatenating (\verb+cons+) an element of type $A$
+to a given list. In other words, the type of lists is the smallest inductive
+type generated by the two constructors \verb+nil+ and \verb+cons+.
+
+The first element of a list is called its {\em head}. If the list
+is empty the head is undefined: as discussed in the previous section,
+we should either return a ``default''
+element, or decide to return an option type.
+We have an additional complication in this
+case, since the ``default'' element should have type $A$ (as the head),
+and we know nothing about $A$ (it could even be an empty type!). We
+have no way to guess a canonical element, and the only
+possibility (along this approach) is to take it in input. These
+are the two options:
+\begin{lstlisting}[language=grafite]
+definition hd ≝ λA.λl: list A.λd:A.
+ match l with [ nil ⇒ d | cons a _ ⇒ a].
+
+definition option_hd ≝
+ λA.λl:list A. match l with
+ [ nil ⇒ None ?
+ | cons a _ ⇒ Some ? a ].
+\end{lstlisting}
+What remains of a list after removing its head is called the {\em tail}
+of the list. Again, the tail of an empty list is undefined, but in
+this case the result must be a list, and we have a canonical inhabitant
+of lists, that is the empty list. So, it is natural to extend the
+definition of tail saying that the tail of nil is nil (very similar
+to saying that the predecessor of $0$ is $0$):
+\begin{lstlisting}[language=grafite]
+definition tail ≝ λA.λl: list A.
+ match l with [ nil ⇒ [] | cons hd tl ⇒ tl].
+\end{lstlisting}
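The three definitions above translate directly. Here is an illustrative Python sketch (not part of the Matita development; Python's `None` stands in for the `None` constructor):

```python
def hd(l, d):
    # head with an explicit default element d for the empty list
    return d if not l else l[0]

def option_hd(l):
    # head returning an option: None on the empty list
    return None if not l else l[0]

def tail(l):
    # the tail of the empty list is the empty list
    return [] if not l else l[1:]
```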
+
+Using destruct, it is easy to prove that \verb+cons+ is injective
+in both arguments.
+\begin{lstlisting}[language=grafite]
+lemma cons_injective_l : ∀A.∀a1,a2:A.∀l1,l2.a1::l1 = a2::l2 → a1 = a2.
+#A #a1 #a2 #l1 #l2 #Heq destruct // qed.
+
+lemma cons_injective_r : ∀A.∀a1,a2:A.∀l1,l2.a1::l1 = a2::l2 → l1 = l2.
+#A #a1 #a2 #l1 #l2 #Heq destruct // qed.
+\end{lstlisting}
+The append function is defined by recursion over the first list:
+\begin{lstlisting}[language=grafite]
+let rec append A (l1: list A) l2 on l1 ≝
match l1 with
+ [ nil ⇒ l2
+ | cons hd tl ⇒ hd :: append A tl l2 ].
+\end{lstlisting}
+We leave it to the reader to prove that \verb+append+ is associative, and
+that \verb+nil+ is a neutral element for it.
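The recursion over the first list can be mirrored in an illustrative Python sketch (not part of the Matita development), together with a spot-check of the associativity and neutrality properties left as an exercise:

```python
def append(l1, l2):
    # structural recursion on the first list, as in the Matita definition
    if not l1:
        return l2
    return [l1[0]] + append(l1[1:], l2)
```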
+
+\input{list_notation.tex}
+
+We conclude this section discussing another important operation
+over lists, namely the reverse function, returning
+a list with the same elements of the original list but in reverse order.
+
+One could define the reverse operation as follows:
+\begin{lstlisting}[language=grafite]
+let rec rev A l1 on l1 ≝
+ match l1 with
+ [ nil ⇒ []
+ | cons a tl ⇒ rev A tl @ [a]
+ ].
+\end{lstlisting}
+However, this implementation is slightly inefficient, since it has
+quadratic complexity.
+A better solution consists in exploiting an accumulator, passing through
+the following auxiliary function:
+\begin{lstlisting}[language=grafite]
+let rec rev_append S (l1,l2:list S) on l1 ≝
+ match l1 with
[ nil ⇒ l2
- | cons a tl ⇒
- let r ≝ unique_append S tl l2 in
- if memb S a r then r else a::r
+ | cons a tl ⇒ rev_append S tl (a::l2)
].
-lemma memb_unique_append: ∀S,a,l1,l2.
- memb S a (unique_append S l1 l2) = true → memb S a l1= true ∨ memb S a l2 = true.
-#S #a #l1 elim l1 normalize [#l2 #H %2 //]
-#b #tl #Hind #l2 cases (true_or_false … (a==b)) #Hab >Hab normalize /2/
-cases (memb S b (unique_append S tl l2)) normalize
- [@Hind | >Hab normalize @Hind]
-qed.
-
-lemma unique_append_elim: ∀S:DeqSet.∀P: S → Prop.∀l1,l2.
- (∀x. memb S x l1 = true → P x) → (∀x. memb S x l2 = true → P x) →
- ∀x. memb S x (unique_append S l1 l2) = true → P x.
-#S #P #l1 #l2 #Hl1 #Hl2 #x #membx cases (memb_unique_append … membx) /2/
-qed.
-
-lemma unique_append_unique: ∀S,l1,l2. uniqueb S l2 = true →
- uniqueb S (unique_append S l1 l2) = true.
-#S #l1 elim l1 normalize // #a #tl #Hind #l2 #uniquel2
-cases (true_or_false … (memb S a (unique_append S tl l2)))
-#H >H normalize [@Hind //] >H normalize @Hind //
-qed.
-
-(*
-Sublists
-
-*)
-definition sublist ≝
- λS,l1,l2.∀a. memb S a l1 = true → memb S a l2 = true.
-
-lemma sublist_length: ∀S,l1,l2.
- uniqueb S l1 = true → sublist S l1 l2 → |l1| ≤ |l2|.
-#S #l1 elim l1 //
-#a #tl #Hind #l2 #unique #sub
-cut (∃l3,l4.l2=l3@(a::l4)) [@memb_exists @sub //]
-* #l3 * #l4 #eql2 >eql2 >length_append normalize
-applyS le_S_S <length_append @Hind [@(andb_true_r … unique)]
->eql2 in sub; #sub #x #membx
-cases (memb_append … (sub x (orb_true_r2 … membx)))
- [#membxl3 @memb_append_l1 //
- |#membxal4 cases (orb_true_l … membxal4)
- [#eqxa @False_ind lapply (andb_true_l … unique)
- <(\P eqxa) >membx normalize /2/ |#membxl4 @memb_append_l2 //
- ]
- ]
-qed.
-
-lemma sublist_unique_append_l1:
- ∀S,l1,l2. sublist S l1 (unique_append S l1 l2).
-#S #l1 elim l1 normalize [#l2 #S #abs @False_ind /2/]
-#x #tl #Hind #l2 #a
-normalize cases (true_or_false … (a==x)) #eqax >eqax
-[<(\P eqax) cases (true_or_false (memb S a (unique_append S tl l2)))
- [#H >H normalize // | #H >H normalize >(\b (refl … a)) //]
-|cases (memb S x (unique_append S tl l2)) normalize
- [/2/ |>eqax normalize /2/]
-]
-qed.
-
-lemma sublist_unique_append_l2:
- ∀S,l1,l2. sublist S l2 (unique_append S l1 l2).
-#S #l1 elim l1 [normalize //] #x #tl #Hind normalize
-#l2 #a cases (memb S x (unique_append S tl l2)) normalize
-[@Hind | cases (a==x) normalize // @Hind]
-qed.
-
-lemma decidable_sublist:∀S,l1,l2.
- (sublist S l1 l2) ∨ ¬(sublist S l1 l2).
-#S #l1 #l2 elim l1
- [%1 #a normalize in ⊢ (%→?); #abs @False_ind /2/
- |#a #tl * #subtl
- [cases (true_or_false (memb S a l2)) #memba
- [%1 whd #x #membx cases (orb_true_l … membx)
- [#eqax >(\P eqax) // |@subtl]
- |%2 @(not_to_not … (eqnot_to_noteq … true memba)) #H1 @H1 @memb_hd
- ]
- |%2 @(not_to_not … subtl) #H1 #x #H2 @H1 @memb_cons //
- ]
- ]
-qed.
-
-(*
-Filtering
-*)
-
-lemma memb_filter_true: ∀S,f,a,l.
- memb S a (filter S f l) = true → f a = true.
-#S #f #a #l elim l [normalize #H @False_ind /2/]
-#b #tl #Hind cases (true_or_false (f b)) #H
-normalize >H normalize [2:@Hind]
-cases (true_or_false (a==b)) #eqab
- [#_ >(\P eqab) // | >eqab normalize @Hind]
-qed.
-
-lemma memb_filter_memb: ∀S,f,a,l.
- memb S a (filter S f l) = true → memb S a l = true.
-#S #f #a #l elim l [normalize //]
-#b #tl #Hind normalize (cases (f b)) normalize
-cases (a==b) normalize // @Hind
+definition reverse ≝ λS.λl.rev_append S l [].
+\end{lstlisting}
+
+{\bf Exercise} Prove the following results:
+\begin{lstlisting}[language=grafite]
+lemma reverse_cons : ∀S,a,l. reverse S (a::l) = (reverse S l)@[a].
+
+lemma reverse_append: ∀S,l1,l2.
+ reverse S (l1 @ l2) = (reverse S l2)@(reverse S l1).
+
+lemma reverse_reverse : ∀S,l. reverse S (reverse S l) = l.
+\end{lstlisting}
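The accumulator-based definition, and the three exercise statements, can be spot-checked with an illustrative Python sketch (not part of the Matita development):

```python
def rev_append(l1, l2):
    # the accumulator l2 collects the elements of l1 in reverse order,
    # so each element is handled exactly once (linear complexity)
    if not l1:
        return l2
    return rev_append(l1[1:], [l1[0]] + l2)

def reverse(l):
    return rev_append(l, [])
```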
+
+
+\subsection{List iterators}
+In general, an iterator for some data type is an object that allows one to
+traverse its structure, performing a given computation.
+There is a canonical set of iterators over lists deriving from
+the programming experience. In this section we shall review a few of
+them, proving some of their properties.
+
+A first, famous iterator is the map function, which
+applies a function $f$ to each element of a list $[a_1;\dots ;a_n]$,
+building the list $[f\,a_1; \dots ; f\,a_n]$.
+\begin{lstlisting}[language=grafite]
+let rec map (A,B:Type[0]) (f: A → B) (l:list A) on l: list B ≝
+ match l with [ nil ⇒ nil ? | cons x tl ⇒ f x :: (map A B f tl)].
+\end{lstlisting}
+The \verb+map+ function distributes over \verb+append+, as can be simply
+proved by induction over the first list:
+\begin{lstlisting}[language=grafite]
+lemma map_append : ∀A,B,f,l1,l2.
+ (map A B f l1) @ (map A B f l2) = map A B f (l1@l2).
+#A #B #f #l1 elim l1
+ [ #l2 @refl | #h #t #IH #l2 normalize // ] qed.
+\end{lstlisting}
+The most important iterator is traditionally called {\em fold}; we
+shall only consider the so-called fold-right variant,
+that takes in input a function $f:A\to B \to B$, a list $[a_1; \dots; a_n]$
+of elements of type $A$, a base element $b:B$ and returns
+the value $f\,a_1\,(f\,a_2\,(\dots (f\,a_n\,b) \dots))$.
+\begin{lstlisting}[language=grafite]
+let rec foldr (A,B:Type[0]) (f:A → B → B) (b:B) (l:list A) on l :B ≝
+ match l with [ nil ⇒ b | cons a l ⇒ f a (foldr A B f b l)].
+\end{lstlisting}
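The recursion scheme of fold-right is easy to mirror in an illustrative Python sketch (not part of the Matita development), for instance to sum a list or to rebuild it:

```python
def foldr(f, b, l):
    # computes f(a1, f(a2, ... f(an, b) ...)) by structural recursion
    if not l:
        return b
    return f(l[0], foldr(f, b, l[1:]))
```

Folding `+` over `[1, 2, 3]` with base `0` computes `1 + (2 + (3 + 0))`.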
+Many other interesting functions can be defined in terms of \verb+foldr+.
+A first interesting example is the filter function, taking in input
+a boolean test \verb+p+ and a list $l$ and returning the sublist of all
+elements of $l$ satisfying the test.
+\begin{lstlisting}[language=grafite]
+definition filter ≝
+ λT.λp:T → bool.
+ foldr T (list T) (λx,l0.if p x then x::l0 else l0) (nil T).
+\end{lstlisting}
+As another example, we can generalize the map function to an operation
+taking in input a binary function $f:A\to B \to C$, two lists
+$[a_1;\dots ;a_n]$ and $[b_1;\dots ;b_m]$ and returning the list
+$[f\,a_1\,b_1; \dots; f\,a_1\,b_m; \dots; f\,a_n\,b_1; \dots; f\,a_n\,b_m]$
+\begin{lstlisting}[language=grafite]
+definition compose ≝ λA,B,C.λf:A→B→C.λl1,l2.
+ foldr ?? (λi,acc.(map ?? (f i) l2)@acc) [ ] l1.
+\end{lstlisting}
+The folded function $\lambda$\verb+i,acc.(map ?? (f i) l2)@acc+
+takes in input an element $i$ and an accumulator,
+and adds to the accumulator the values obtained by mapping the
+partial instantiation $f\,i$ over the elements of $l_2$. This function
+is iterated over
+all elements of the first list $l_1$, starting with an empty accumulator.
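The definition of compose in terms of fold-right can be replayed in an illustrative Python sketch (not part of the Matita development; `map_list` is a hypothetical helper standing in for map):

```python
def foldr(f, b, l):
    return b if not l else f(l[0], foldr(f, b, l[1:]))

def map_list(f, l):
    # map defined via foldr, as in the text
    return foldr(lambda x, acc: [f(x)] + acc, [], l)

def compose(f, l1, l2):
    # for each i in l1, map the partial application f(i, _) over l2
    # and prepend the resulting block to the accumulator
    return foldr(lambda i, acc: map_list(lambda b: f(i, b), l2) + acc, [], l1)
```

Note that the result is grouped by the elements of the first list: all pairs built from $a_1$ come first, then those built from $a_2$, and so on.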
+
+\subsection{Naive Set Theory}
+Given a ``universe'' $U$ (an arbitrary type $U:Type$), a naive but
+practical way to deal with ``sets'' in $U$ is simply to identify
+them with their characteristic property, that is, to represent them as
+functions of type $U\to \mbox{Prop}$.
+
+For instance, the empty set is defined by the False predicate;
+a singleton set $\{x\}$ is defined by the property that its elements are
+equal to $x$.
+
+\begin{lstlisting}[language=grafite]
+definition empty_set ≝ λU:Type[0].λu:U.False.
+definition singleton ≝ λU.λx,u:U.x=u.
+\end{lstlisting}
+
+The membership relation is trivial: an element $u$ is in the set
+(defined by the property) $P$ if and only if it enjoys the property:
+
+\begin{lstlisting}[language=grafite]
+definition member ≝ λU:Type[0].λu:U.λP:U→Prop.P u.
+\end{lstlisting}
+The operations of union, intersection, complement and subtraction
+are defined in a straightforward way, in terms of logical operations:
+
+\begin{lstlisting}[language=grafite]
+definition union ≝ λU:Type[0].λP,Q:U → Prop.λa.P a ∨ Q a.
+
+definition intersection ≝ λU:Type[0].λP,Q:U → Prop.λa.P a ∧ Q a.
+
+definition complement ≝ λU:Type[0].λA:U → Prop.λw.¬ A w.
+
+definition substraction ≝ λU:Type[0].λA,B:U → Prop.λw.A w ∧ ¬ B w.
+\end{lstlisting}
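Reading characteristic functions into bool instead of Prop, the same definitions can be sketched in Python (an illustrative, decidable rendering, not part of the Matita development):

```python
def empty_set(u):
    return False                      # no element satisfies it

def singleton(x):
    return lambda u: x == u           # membership is equality with x

def union(P, Q):
    return lambda a: P(a) or Q(a)

def intersection(P, Q):
    return lambda a: P(a) and Q(a)

def complement(P):
    return lambda a: not P(a)

def subtraction(P, Q):
    return lambda a: P(a) and not Q(a)
```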
+More interesting are the notions of subset and equality. Given
+two sets $P$ and $Q$, $P$ is a subset of $Q$ when any element $u$ in
+$P$ is also in $Q$, that is when $P\,u$ implies $Q\,u$.
+\begin{lstlisting}[language=grafite]
+definition subset: ∀A:Type[0].∀P,Q:A→Prop.Prop ≝ λA,P,Q.∀a:A.(P a → Q a).
+\end{lstlisting}
+Then, two sets $P$ and $Q$ are equal when $P \subseteq Q$ and
+$Q \subseteq P$, or equivalently when for any element $u$,
+$P\,u \iff Q\,u$.
+\begin{lstlisting}[language=grafite]
+definition eqP ≝ λA:Type[0].λP,Q:A → Prop.∀a:A.P a ↔ Q a.
+\end{lstlisting}
+We shall use the infix notation $\approx$ for the previous notion of
+equality.
+It is important to observe that \verb+eqP+ is
+different from the Leibniz equality of section \ref{}. As we already
+observed, Leibniz equality is a pretty syntactical (or intensional)
+notion, that is a notion concerning the {\em connotation} of an object
+and not its {\em denotation} (the shape assumed by the object, and
+not the information it is supposed to convey). Intensionality stands
+in contrast with {\em extensionality}, referring to principles that
+judge objects to be equal if they enjoy {\em a given subset} of external,
+observable properties (e.g. the property of having the same
+elements). For instance, given two sets $A$ and $B$, we can prove that
+$A\cup B \approx B\cup A$, since they have the same elements, but
+there is no way to prove $A\cup B = B\cup A$.
+
+The main practical consequence is that, while we can always exploit
+a Leibniz equality between two terms $t_1$ and $t_2$ for rewriting
+one into the other (in fact, this is the {\em essence} of Leibniz
+equality), we cannot do the same for an extensional equality (we
+could only rewrite inside propositions ``compatible'' with our
+external observation of the objects).
+
+\subsection{Sets with decidable equality}
+We say that a property $P:A\to Prop$ over a datatype $A$
+is {\em decidable} when we have an
+effective way to assess the validity of $P\,a$ for any
+$a:A$. As a consequence
+of G\"odel incompleteness theorem, not every predicate
+is decidable: for instance, extensional equality on sets is not decidable,
+in general.
+
+Decidability can be expressed in several possible ways. A convenient
+one is to state it in terms of the computability of the characteristic
+function of the predicate $P$, that is in terms of the existence of a
+function $Pb: A \to \mbox{bool}$ such that
+ \[P\,a \iff Pb\,a = \mbox{true}\]
+Decidability is an important issue, and since equality is an essential
+predicate, it is always interesting to try to understand when a given
+notion of equality is decidable or not.
+
+In particular, Leibniz equality on inductively generated data structures
+is often decidable, since we can simply write a decision algorithm by
+structural induction on the terms. For instance, we can define
+characteristic functions for booleans and natural numbers in the following
+way:
+\begin{lstlisting}[language=grafite]
+definition beqb ≝ λb1,b2.
+ match b1 with [ true ⇒ b2 | false ⇒ notb b2].
+
+let rec neqb n m ≝
+match n with
+ [ O ⇒ match m with [ O ⇒ true | S q ⇒ false]
+ | S p ⇒ match m with [ O ⇒ false | S q ⇒ neqb p q]
+ ].
+\end{lstlisting}
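The two decision procedures can be mirrored in an illustrative Python sketch (not part of the Matita development), and checked against ordinary equality:

```python
def beqb(b1, b2):
    # boolean equality by case analysis on the first argument
    return b2 if b1 else not b2

def neqb(n, m):
    # structural recursion on the two naturals
    if n == 0:
        return m == 0
    if m == 0:
        return False
    return neqb(n - 1, m - 1)
```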
+Knowing whether Leibniz equality for a given type is decidable is so
+important that it turns out to be convenient to pack this information into
+a single algebraic data structure, called \verb+DeqSet+:
+\begin{lstlisting}[language=grafite]
+record DeqSet : Type[1] ≝
+ { carr :> Type[0];
+ eqb: carr → carr → bool;
+ eqb_true: ∀x,y. (eqb x y = true) ↔ (x = y)}.
+
+notation "a == b" non associative with precedence 45 for @{ 'eqb $a $b }.
+interpretation "eqb" 'eqb a b = (eqb ? a b).
+\end{lstlisting}
+A \verb+DeqSet+ is simply a record composed of a carrier type \verb+carr+,
+a boolean function \verb+eqb: carr+$\to$\verb+carr+$\to$\verb+bool+ representing
+the decision algorithm, and a {\em proof} \verb+eqb_true+ that the decision algorithm
+is correct. The \verb+:>+ symbol declares \verb+carr+ as a {\em coercion} from a DeqSet to its
+carrier type. We use the infix notation ``=='' for the decidable
+equality \verb+eqb+ of the carrier.
+
+Suppose we proved the following facts (do it as an exercise)
+\begin{lstlisting}[language=grafite]
+lemma beqb_true_to_eq: ∀b1,b2. beqb b1 b2 = true ↔ b1 = b2.
+lemma neqb_true_to_eq: ∀n,m:nat. neqb n m = true ↔ n = m.
+\end{lstlisting}
+Then, we can build the following records:
+\begin{lstlisting}[language=grafite]
+definition DeqBool ≝ mk_DeqSet bool beqb beqb_true_to_eq.
+definition DeqNat ≝ mk_DeqSet nat neqb neqb_true_to_eq.
+\end{lstlisting}
+Note that, since we declared a coercion from \verb+DeqSet+ to its
+carrier, the expression \verb+0:DeqNat+ is well typed, and it is understood
+by the system as \verb+0:carr DeqNat+ (that is, coercions are automatically
+inserted by the system, if required).
+
+%It is convenient to have a simple way to {\em reflect} \cite{ssreflect}
+%a proof of
+%$(eqb\;x\;y = true)$ into a proof of $(x = y)$; we use the two operators
+%$\mathit{\\P}$ and $\mathit{\\b}$ to this aim.
+
+\subsection{Unification hints}
+Suppose now to write an expression of the following kind:
+\[b == \mbox{false}\]
+that, after removing the notation, is equivalent to
+\[eqb\;?\;b\;\mbox{false}\]
+The system knows that $\mbox{false}$ is a boolean, so in order to
+accept the expression, it should {\em figure out} some \verb+DeqSet+
+having \verb+bool+
+as carrier. This is not a trivial operation: Matita should either try
+to synthesize it (a complex operation known as {\em narrowing})
+or look into its database of results for a possible solution.
+
+In these situations, we may suggest the intended solution to Matita using
+the mechanism of unification hints \cite{hints}. The concrete syntax
+of unification hints is a bit involved: we strongly recommend the user
+to include the file \verb+hints_declaration.ma+, which allows you
+to write things in a more convenient and readable way.
+\begin{lstlisting}[language=grafite]
+include ``hints_declaration.ma''.
+\end{lstlisting}
+For instance, the following declaration tells Matita that a solution
+of the equation \verb+bool = carr X+ is $X = DeqBool$.
+\begin{lstlisting}[language=grafite]
+unification hint 0 ≔ ;
+ X ≟ DeqBool
+(* ---------------------------------------- *) ⊢
+ bool ≡ carr X.
+\end{lstlisting}
+Using the previous notation (we suggest the reader copy and paste it
+from existing hints) the hint is expressed as an inference rule.
+The conclusion of the rule is the unification problem that we intend to solve,
+containing one or more variables $X_1,\dots,X_n$. The premises of
+the rule are the solutions we are suggesting to Matita. In general,
+unification hints should only be used when there exists just one
+``intended'' (canonical) solution for the given equation.
+When you declare a unification hint, Matita verifies its correctness by
+instantiating the unification problem with the hinted solution,
+and checking the convertibility between the two sides of the equation.
+\begin{lstlisting}[language=grafite]
+example exhint: ∀b:bool. (b == false) = true → b = false.
+#b #H @(\P H).
qed.
+\end{lstlisting}
+In a similar way, the following hint tells Matita which \verb+DeqSet+
+structure is intended when unifying \verb+beqb+ with a decidable equality:
+
+\begin{lstlisting}[language=grafite]
+unification hint 0 ≔ b1,b2:bool;
+ X ≟ mk_DeqSet bool beqb beqb_true_to_eq
+(* ---------------------------------------- *) ⊢
+ beqb b1 b2 ≡ eqb X b1 b2.
+\end{lstlisting}
+
+
+\subsection{Prop vs. bool}
+It is probably time to make a discussion about \verb+Prop+ and \verb+bool+, and
+their different roles in the Calculus of Inductive Constructions.
+
+Inhabitants of the sort \verb+Prop+ are logical propositions. In turn, logical
+propositions \verb+P:Prop+ can be inhabited by their proofs $H:P$. Since, in
+general, the validity of a property $P$ is not decidable, the role of the
+proof is to provide a witness of the correctness of $P$ that the system can
+automatically check.
+
+On the other hand, bool is just an inductive datatype with two constructors true
+and false: these are terms, not types, and cannot be inhabited.
+Logical connectives on bool are computable functions, defined by their
+truth tables, using case analysis.
-lemma memb_filter: ∀S,f,l,x. memb S x (filter ? f l) = true →
-memb S x l = true ∧ (f x = true).
-/3/ qed.
-
-lemma memb_filter_l: ∀S,f,x,l. (f x = true) → memb S x l = true →
-memb S x (filter ? f l) = true.
-#S #f #x #l #fx elim l normalize //
-#b #tl #Hind cases (true_or_false (x==b)) #eqxb
- [<(\P eqxb) >(\b (refl … x)) >fx normalize >(\b (refl … x)) normalize //
- |>eqxb cases (f b) normalize [>eqxb normalize @Hind| @Hind]
- ]
-qed.
-
-(*
-Exists
-
-*)
-
-let rec exists (A:Type[0]) (p:A → bool) (l:list A) on l : bool ≝
-match l with
-[ nil ⇒ false
-| cons h t ⇒ orb (p h) (exists A p t)
-].
\ No newline at end of file
+Prop and bool are, in a sense, complementary: the former is more suited for
+abstract logical reasoning,
+while the latter allows, in some situations, for brute-force evaluation.
+Suppose for instance that we wish to prove that $2 \le 3!$. Starting
+from the definition of the factorial function and properties of the less
+or equal relation, the previous proof could take several steps. Suppose
+however that we proved the decidability of $\le$, that is the existence of
+a boolean function $\mbox{leb}$ reflecting $\le$ in the sense that
+\[n \le m \iff \mbox{leb}\,n\,m = \mbox{true}\]
+Then, we could reduce the verification of $2 \le 3!$ to that of
+$\mbox{leb}\,2\,(3!) = \mbox{true}$, which just requires a mere computation!
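The point can be made concrete with an illustrative Python sketch (not part of the Matita development): once a structural decision procedure for ≤ is available, verifying 2 ≤ 3! is a plain computation.

```python
def leb(n, m):
    # structural decision procedure for n <= m on naturals
    if n == 0:
        return True
    if m == 0:
        return False
    return leb(n - 1, m - 1)

def fact(n):
    return 1 if n == 0 else n * fact(n - 1)
```

Evaluating `leb(2, fact(3))` settles the claim by computation alone.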
+
+\subsection{Finite Sets}
+A finite set is a record consisting of a DeqSet $A$,
+a list of elements of type A enumerating all the elements of the set,
+and the proofs that the enumeration does not contain repetitions
+and is complete (\verb+memb+ is the membership operation on
+lists, defined in the obvious way, and the question mark is an {\em
+implicit parameter} automatically filled in by the system).
+\begin{lstlisting}[language=grafite]
+record FinSet : Type[1] ≝
+{ FinSetcarr:> DeqSet;
+ enum: list FinSetcarr;
+ enum_unique: uniqueb FinSetcarr enum = true;
+ enum_complete : ∀x:FinSetcarr. memb ? x enum = true}.
+\end{lstlisting}
+The library provides many operations for building new \verb+FinSet+s by
+composing
+existing ones: for example,
+if \verb+A+ and \verb+B+ have type \verb+FinSet+, then
+also \verb+option A+, \verb+A+$\times$\verb+B+,
+\verb+A+$\oplus$\verb+B+ are finite sets. In all these cases, unification
+hints are used to suggest the {\em intended} finite set structure
+associated with them, that makes their use quite transparent to the user.
+
+A particularly interesting case is that of the arrow type \verb+A+$\to$\verb+B+.
+We may define the graph of \verb+f:A+$\to$\verb+B+, as the
+set (sigma type) of all pairs $\langle a,b \rangle$ such that
+$f (a) = b$.
+\begin{lstlisting}[language=grafite]
+definition graph_of ≝ λA,B.λf:A→B. Σp:A×B.f (\fst p) = \snd p.
+\end{lstlisting}
+In case the equality is decidable, we may effectively
+enumerate all elements of the graph by simply filtering from
+pairs in \verb+A+$\times$\verb+B+ those satisfying the
+test $\lambda$\verb+p.f (\fst p) == \snd p+:
+
+\begin{lstlisting}[language=grafite]
+definition graph_enum ≝ λA,B:FinSet.λf:A→B.
+ filter ? (λp.f (\fst p) == \snd p) (enum (FinProd A B)).
+\end{lstlisting}
+The proofs that this enumeration does not contain repetitions and
+is complete are straightforward.
+
+\subsection{Vectors}
+A vector of length $n$ of elements of type $A$
+is simply defined in Matita as a record composed
+by a list of elements of type $A$, plus a proof that the
+list has the expected length. Vectors are a paradigmatic example
+of {\em dependent} type, that is of a type whose definition depends on
+a term.
+\begin{lstlisting}[language=grafite]
+record Vector (A:Type[0]) (n:nat): Type[0] ≝
+{ vec :> list A;
+ len: length ? vec = n }.
+\end{lstlisting}
+Given a list $l$ we may trivially turn it into a vector of length $|l|$;
+we just need to prove that $|l|=|l|$, which follows from the reflexivity
+of equality.
+\begin{lstlisting}[language=grafite]
+lemma Vector_of_list ≝ λA,l.mk_Vector A (|l|) l (refl ??).
+\end{lstlisting}
+
+Most functions operating on lists can be naturally extended to
+vectors: interesting cases are \verb+vec_append+, concatenating vectors,
+and \verb+vec_map+, mapping a function $f$ on all elements of the input
+vector and returning a vector of the same dimension as the input one.
+
+Since a vector is automatically coerced, if needed, to the list of
+its elements, we may simply use typical functions over lists (such as
+\verb+nth+) to extract/filter specific components.
+
+The function \verb+change_vec A n v a i+ replaces the content of
+the vector $v$ at position $i$ with the value $a$.
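The behaviour of such an update operation can be sketched on plain lists in Python (an illustrative rendering, not the Matita definition, which also tracks the length in the type):

```python
def change_vec(v, a, i):
    # returns a copy of v with the element at position i replaced by a;
    # an out-of-range index leaves the list unchanged, and the length
    # of the result always equals the length of the input
    return [a if j == i else x for j, x in enumerate(v)]
```

Note that updating twice at the same position is the same as updating once (idempotency), one of the properties mentioned below.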
+
+The most frequent operation on vectors for the purposes of our work
+is their comparison. In this case, we have essentially two possible
+approaches: we may proceed component-wise, using the following lemma
+\begin{lstlisting}[language=grafite]
+lemma eq_vec: ∀A,n.∀v1,v2:Vector A n.∀d.
+ (∀i. i < n → nth i A v1 d = nth i A v2 d) → v1 = v2.
+\end{lstlisting}
+or alternatively we may manipulate vectors exploiting the commutation
+and idempotency properties of \verb+change_vec+ and its interplay with
+other operations.