(*D

Matita Tutorial: inductively generated formal topologies
========================================================

This is a not so short introduction to [Matita][2], based on
the formalization of the paper

> Between formal topology and game theory: an
> explicit solution for the conditions for an
> inductive generation of formal topologies

by Stefano Berardi and Silvio Valentini.

The tutorial is by Enrico Tassi.

The tutorial spends a considerable amount of effort in defining
notations that resemble the ones used in the original paper. We believe
this is an important part of every formalization, not only from the
aesthetic point of view, but also from the practical point of view. Being
consistent allows us to follow the paper in a pedantic way, and hopefully
to make the formalization (at least the definitions and the proved
statements) readable to the authors of the paper.

The formalization uses the ng (new generation) version of Matita
(which will be named 1.x when finally released).
The last stable release of the "old" system is named 0.5.7; the ng system
coexists with the old one in every development release
(named "nightly builds" in the download page of Matita)
with a version strictly greater than 0.5.7.

To read this tutorial in HTML format, you need a decent browser
equipped with a unicode capable font.
Use the PDF format if some
symbols are not displayed correctly.

Orienteering
------------

The graphical interface of Matita is composed of three windows:
the script window, on the left, is where you type; the sequent
window, on the top right, is where the system shows you the ongoing proof;
the error window, on the bottom right, is where the system complains.
On the top of the script window five buttons drive the processing of
the proof script. From left to right they request the system to:

- go back to the beginning of the script
- go back one step
- go to the current cursor position
- advance one step
- advance to the end of the script

When the system processes a command, it locks the part of the script
corresponding to the command, so that you cannot edit it anymore
(without going back). Locked parts are coloured in blue.

The sequent window is hypertextual, i.e. you can click on symbols
to jump to their definition, or switch between different notations
for the same expression (for example, equality has two notations,
one of them making the type of the arguments explicit).

Everywhere in the script you can use the `ncheck (term).` command to
ask for the type of a given term. If you do that in the middle of a proof,
the term is assumed to live in the current proof context (i.e. it can use
variables introduced so far).

To ease the typing of mathematical symbols, the script window
implements two unusual input facilities:

- some TeX symbols can be typed using their TeX names, and are
  automatically converted to UTF-8 characters. For a list of
  the supported TeX names, see the menu: View ‣ TeX/UTF-8 Table.
  Moreover some ASCII-art is understood as well, like `=>` and `->`
  to mean double or single arrows.
  Here we recall some of these "shortcuts":

  - ∀ can be typed with `\forall`
  - λ can be typed with `\lambda`
  - ≝ can be typed with `\def` or `:=`
  - → can be typed with `\to` or `->`

- some symbols have variants, like the ≤ relation and ≼, ≰, ≠.
  The user can cycle between variants typing one of them and then
  pressing ALT-L. Note that letters have variants too, for
  example W has Ω, 𝕎 and 𝐖, L has Λ, 𝕃 and 𝐋, F has Φ, …
  Variants are listed in the aforementioned TeX/UTF-8 table.

The syntax of terms (and types) is that of the λ-calculus CIC
on which Matita is based. The main syntactic difference w.r.t.
the usual mathematical notation is function application, written
`(f x y)` in place of `f(x,y)`.

Pressing `F1` opens the Matita manual.

CIC (as [implemented in Matita][3]) in a nutshell
-------------------------------------------------

CIC is a full and functional Pure Type System (all products do exist,
and their sort is determined by the target) with an impredicative sort
Prop and a predicative sort Type. It features both dependent types and
polymorphism, like the [Calculus of Constructions][4]. Proofs and terms share
the same syntax, and they can occur in types.

The environment used in the typing judgement can be populated with
well typed definitions or theorems, (co)inductive types validating positivity
conditions, and recursive functions provably total by a simple syntactical
analysis (recursive calls are allowed only on structurally smaller subterms).
Co-recursive
functions can be defined as well, and must satisfy the dual condition, i.e.
they may perform the recursive call only after having generated a constructor
(a piece of output).

The CIC λ-calculus is equipped with a pattern matching construct (match)
on inductive types defined in the environment.
This construct, together with the possibility to
define total recursive functions, allows one to define eliminators (or
destructors) for (co)inductive types. The λ-calculus is also equipped with
explicitly typed local definitions (let in) that in the degenerate case
work as casts (i.e. the type annotation `(t : T)` is implemented as
`let x : T ≝ t in x`).

Types are compared up to conversion. Since types may depend on terms,
conversion involves β-reduction, δ-reduction (definition unfolding),
ζ-reduction (local definition unfolding), ι-reduction (pattern matching
simplification), μ-reduction (recursive function computation) and
ν-reduction (co-fixpoint computation).

Since we are going to formalize constructive and predicative mathematics
in an intensional type theory like CIC, we try to establish some terminology.
Type is the sort of sets equipped with the `Id` equality (i.e. an intensional,
not quotiented set).

We write `Type[i]` to mention a Type in the predicative hierarchy
of types. To ease comprehension we will use `Type[0]` for sets
and `Type[1]` for classes. The index `i` is just a label: constraints among
universes are declared by the user. The standard library defines

> Type[0] < Type[1] < Type[2]

For every `Type[i]` there is a corresponding level of predicative
propositions `CProp[i]` (the initial C is due to historical reasons, and
stands for constructive; `PProp` would be more appropriate).
A predicative proposition cannot be eliminated toward
`Type[j]` unless it holds no computational content (i.e. it is an inductive
proposition with 0 or 1 constructors with propositional arguments, like `Id`
and `And` but not like `Or`).

The distinction between predicative propositions and predicative data types
is a peculiarity of Matita (for example in CIC as implemented by Coq they are
the same).
The additional restriction of not allowing the elimination of a CProp
toward a Type makes the theory of Matita minimal in the following sense:

Theorems proved in this setting can be reused in a classical framework
(forcing Matita to collapse the hierarchy of constructive propositions).
Alternatively, one can decide to collapse predicative propositions and
datatypes, recovering the Axiom of Choice (i.e. ∃ really holds a content
and can thus be eliminated to provide a witness for a Σ).

Formalization choices
---------------------

We will avoid using `Id` (Leibniz equality),
thus we will explicitly equip a set with an equivalence relation when needed.
We will call this structure a _setoid_. Note that we will
attach the infix `=` symbol only to the equality of a setoid,
not to Id.

The standard library and the `include` command
----------------------------------------------

Some basic notions, like subset, membership, intersection and union,
are part of the standard library of Matita.

These notions come with some standard notation attached to them:

- A ∪ B can be typed with `A \cup B`
- A ∩ B can be typed with `A \cap B`
- A ≬ B can be typed with `A \between B`
- x ∈ A can be typed with `x \in A`
- Ω^A, that is the type of the subsets of A, can be typed with `\Omega ^ A`

The `include` command tells Matita to load a part of the library,
in particular the part that we will use can be loaded as follows:

D*)

include "sets/sets.ma".

(*D

Some basic results that we will use are also part of the sets library:
- subseteq\_union\_l: ∀A.∀U,V,W:Ω^A.U ⊆ W → V ⊆ W → U ∪ V ⊆ W
- subseteq\_intersection\_r: ∀A.∀U,V,W:Ω^A.W ⊆ U → W ⊆ V → W ⊆ U ∩ V

Defining the axiom set
----------------------

A set of axioms is made of a set(oid) `S`, a family of sets `I` and a
family `C` of subsets of `S` indexed by elements `a` of `S`
and elements of `I(a)`.

It is desirable to state theorems like "for every set of axioms, …"
without explicitly mentioning S, I and C. To do that, the three
components have to be grouped into a record (essentially a dependently
typed tuple). The system is able to generate the projections
of the record automatically, and they are named after the fields of
the record. So, given an axiom set `A` we can obtain the set
with `S A`, the family of sets with `I A` and the family of subsets
with `C A`.

D*)

nrecord Ax : Type[1] ≝ {
  S :> Type[0];
  I : S → Type[0];
  C : ∀a:S. I a → Ω^S
}.

(*D

Forget for a moment the `:>` that will be detailed later, and focus on
the record definition. It is made of a list of pairs: a name, followed
by `:` and its type. It is a dependently typed tuple, thus
already defined names (fields) can be used in the types that follow.

Note that `S` is here declared to be a `Type[0]` and not a `setoid`. The
original paper probably considers `S` to be a setoid, `I` to generate
setoids, and both `I` and `C` to be (dependent) morphisms. For the sake of
simplicity, we will "cheat" and use
setoids only when strictly needed (i.e. where we want to talk about
equality). Setoids will play a role only when we define
the alternative version of the axiom set.

Note that the field `S` was declared with `:>` instead of a simple `:`.
This declares the `S` projection to be a coercion. A coercion is
a "cast" function the system automatically inserts when it is needed.
In this case, the projection `S` has type `Ax → Type[0]`, and whenever
the expected type of a term is a type while its type is `Ax`, the
system inserts the coercion around it, to make the whole term well typed.

When formalizing an algebraic structure, declaring the carrier as a
coercion is a common practice, since it allows one to write statements like

    ∀G:Group.∀x:G.x * x^-1 = 1

The quantification over `x` of type `G` is ill-typed, since `G` is a term
(of type `Group`) and thus not a type. Since the carrier projection
`carr` is a coercion, mapping a `Group` into the type of
its elements, the system automatically inserts `carr` around `G`,
obtaining `…∀x: carr G.…`.

Coercions are hidden by the system when it displays a term.
In this particular case, the coercion `S` allows one to write (and read):

    ∀A:Ax.∀a:A.…

Since `A` is not a type, but it can be turned into one by the coercion
`S`, the application `S A` is silently inserted.

Implicit arguments
------------------

Something that is still not satisfactory is that the dependent types
of `I` and `C` are abstracted over the axiom set. To obtain the
precise type of a term, you can use the `ncheck` command as follows.

D*)

(** ncheck I. *) (* shows: ∀A:Ax.A → Type[0] *)
(** ncheck C. *) (* shows: ∀A:Ax.∀a:A.I A a → Ω^A *)

(*D

One would like to write `I a` and not `I A a` under a context where
`A` is an axiom set and `a` has type `S A` (or, thanks to the coercion
mechanism, simply `A`). In Matita, a question mark represents an implicit
argument, i.e. a missing piece of information the system is asked to
infer. Matita performs Hindley-Milner-style type inference, thus writing
`I ? 
a` is enough: since the second argument of `I` is typed by the
first one, the first (omitted) argument can be inferred just by
computing the type of `a` (that is `A`).

D*)

(** ncheck (∀A:Ax.∀a:A.I ? a). *) (* shows: ∀A:Ax.∀a:A.I A a *)

(*D

This is still not completely satisfactory, since you always have to type
`?`; to fix this minor issue we have to introduce the notational
support built into Matita.

Notation for I and C
--------------------

Matita is equipped with a quite complex notational support,
allowing the user to define and use mathematical notations
([From Notation to Semantics: There and Back Again][1]).

Since notations are usually ambiguous (e.g. the frequent overloading of
symbols) Matita distinguishes between the term level, the
content level, and the presentation level, allowing multiple
mappings between the content and the term level.

The mapping between the presentation level (i.e. what is typed on the
keyboard and what is displayed in the sequent window) and the content
level is defined with the `notation` command. When followed by
`>`, it defines an input (only) notation.

D*)

notation > "𝐈 term 90 a" non associative with precedence 70 for @{ 'I $a }.
notation > "𝐂 term 90 a term 90 i" non associative with precedence 70 for @{ 'C $a $i }.

(*D

The first notation defines the writing `𝐈 a` where `a` is a generic
term of precedence 90, the maximum one. This high precedence forces
parentheses around any term of a lower precedence. For example `𝐈 x`
would be accepted, since identifiers have precedence 90, but
`𝐈 f x` would be interpreted as `(𝐈 f) x`. In the latter case, parentheses
have to be put around `f x`, thus the accepted writing would be `𝐈 (f x)`.

To obtain the `𝐈` it is enough to type `I` and then cycle between its
similar symbols with ALT-L. The same holds for `𝐂`. Notations cannot use
regular letters or round parentheses, thus their variants (like the
bold ones) have to be used.
The first notation associates `𝐈 a` with `'I $a` where `'I` is a
new content element to which a term `$a` is passed.

Content elements have to be interpreted, and possibly multiple,
incompatible, interpretations can be defined.

D*)

interpretation "I" 'I a = (I ? a).
interpretation "C" 'C a i = (C ? a i).

(*D

The `interpretation` command allows one to define the mapping between
the content level and the term level. Here we associate the `I` and
`C` projections of the axiom set record, where the axiom set is an
implicit argument `?` to be inferred by the system.

Interpretations are bi-directional: when displaying a term like
`C _ a i`, the system looks for a presentation of the content element
`'C a i`.

D*)

notation < "𝐈 \sub( ⟨a⟩ )" non associative with precedence 70 for @{ 'I $a }.
notation < "𝐂 \sub( ⟨a,\emsp i⟩ )" non associative with precedence 70 for @{ 'C $a $i }.

(*D

For output purposes we can define more complex notations, for example
we can put bold parentheses around the arguments of `𝐈` and `𝐂`, decreasing
the size of the arguments and lowering their baseline (i.e. putting them
in subscript position), separating them with a comma followed by a little
space.

The first (technical) definition
--------------------------------

Before defining the cover relation as an inductive predicate, one
has to notice that the infinity rule uses, in its hypotheses, the
cover relation between two subsets, while the inductive predicate
we are going to define relates an element and a subset.

An option would be to unfold the definition of cover between subsets,
but we prefer to define the abstract notion of cover between subsets
(so that we can attach an (ambiguous) notation to it).

Anyway, to ease the understanding of the definition of the cover relation
between subsets, we first define the inductive predicate unfolding the
definition, and we later refine it.
D*)

ninductive xcover (A : Ax) (U : Ω^A) : A → CProp[0] ≝
| xcreflexivity : ∀a:A. a ∈ U → xcover A U a
| xcinfinity : ∀a:A.∀i:𝐈 a. (∀y.y ∈ 𝐂 a i → xcover A U y) → xcover A U a.

(*D

We defined xcover (the x will be removed in the final version of the
definition) as an inductive predicate. The arity of the inductive
predicate has to be carefully analyzed:

> (A : Ax) (U : Ω^A) : A → CProp[0]

The syntax separates with `:` abstractions that are fixed for every
constructor (introduction rule) from abstractions that can change. In this
case the parameter `U` is abstracted once and for all in front of every
constructor, and every occurrence of the inductive predicate is applied to
`U` in a consistent way. Arguments abstracted on the right of `:` are not
constant: for example, the xcinfinity constructor introduces `a ◃ U`,
but under the assumption that (for every y) `y ◃ U`. In that rule, the left
hand side of the predicate changes, thus it has to be abstracted (in the
arity of the inductive predicate) on the right of `:`.

D*)

(** ncheck xcreflexivity. *) (* shows: ∀A:Ax.∀U:Ω^A.∀a:A.a ∈ U → xcover A U a *)

(*D

We now want to abstract out `(∀y.y ∈ 𝐂 a i → xcover A U y)` and define
a notion `cover_set` to which a notation `𝐂 a i ◃ U` can be attached.

This notion has to be abstracted over the cover relation (whose
type is the arity of the inductive `xcover` predicate just defined).

Then it has to be abstracted over the arguments of that cover relation,
i.e. the axiom set and the set `U`, and over the subset (in this case
`𝐂 a i`) sitting on the left hand side of `◃`.

D*)

ndefinition cover_set :
  ∀cover: ∀A:Ax.Ω^A → A → CProp[0]. ∀A:Ax.∀C,U:Ω^A. CProp[0]
≝
  λcover. λA,C,U. ∀y.y ∈ C → cover A U y.

(*D

The `ndefinition` command takes a name, a type and a body (of that type).
The type can be omitted, and in that case it is inferred by the system.
If the type is given, the system uses it to infer implicit arguments
of the body. In this case all types are left implicit in the body.

We now define the notation `a ◃ b`. Here the keywords `hvbox`
and `break` tell the system how to wrap text when it does not
fit the screen (they can be safely ignored for the scope of
this tutorial). We also add an interpretation for that notation,
where the (abstracted) cover relation is implicit. The system
will not be able to infer it from the other arguments `C` and `U`
and will thus prompt the user for it. This is also why we named this
interpretation `covers set temp`: we will later define another
interpretation in which the cover relation is the one we are going to
define.

D*)

notation "hvbox(a break ◃ b)" non associative with precedence 45
for @{ 'covers $a $b }.

interpretation "covers set temp" 'covers C U = (cover_set ?? C U).

(*D

The cover relation
------------------

We can now define the cover relation using the `◃` notation for
the premise of infinity.

D*)

ninductive cover (A : Ax) (U : Ω^A) : A → CProp[0] ≝
| creflexivity : ∀a. a ∈ U → cover A U a
| cinfinity : ∀a. ∀i. 𝐂 a i ◃ U → cover A U a.
(** screenshot "cover". *)
napply cover;
nqed.

(*D

Note that the system accepts the definition
but prompts the user for the relation the `cover_set` notion is
abstracted on.

The horizontal line separates the hypotheses from the conclusion.
The `napply cover` command tells the system that the relation
it is looking for is exactly our first context entry (i.e. the inductive
predicate we are defining, up to α-conversion); the `nqed` command then
ends a definition or proof.
We can now define the interpretation for the cover relation between an
element and a subset first, and then between two subsets (this time
fixing the relation `cover_set` is abstracted on).

D*)

interpretation "covers" 'covers a U = (cover ? U a).
interpretation "covers set" 'covers a U = (cover_set cover ? a U).

(*D

We will proceed similarly for the fish relation, but before going
on it is better to give a short introduction to the proof mode of Matita.
We define the `cover_set` term again, but this time we build
its body interactively. In CIC, the λ-calculus Matita is based on, proofs
and terms share the same syntax, so it is possible to use the
commands devoted to building proof terms also to build regular definitions.
A tentative semantics for the proof mode commands (called tactics)
in terms of sequent calculus rules is given in the
appendix.

D*)

ndefinition xcover_set :
  ∀c: ∀A:Ax.Ω^A → A → CProp[0]. ∀A:Ax.∀C,U:Ω^A. CProp[0].
                                         (** screenshot "xcover-set-1". *)
#cover; #A; #C; #U;                      (** screenshot "xcover-set-2". *)
napply (∀y:A.y ∈ C → ?);                 (** screenshot "xcover-set-3". *)
napply cover;                            (** screenshot "xcover-set-4". *)
##[ napply A;
##| napply U;
##| napply y;
##]
nqed.

(*D[xcover-set-1]
The system asks for a proof of the full statement, in an empty context.

The `#` command is the ∀-introduction rule; it gives a name to an
assumption, putting it in the context, and generates a λ-abstraction
in the proof term.

D[xcover-set-2]
We now have to provide a proposition, and we exhibit it. We leave
a part of it implicit; since the system cannot infer it, it will
ask for it later.
Note that the type of `∀y:A.y ∈ C → ?` is a proposition
whenever `?` is a proposition.

D[xcover-set-3]
The proposition we want to provide is an application of the
cover relation we have abstracted in the context.
The `napply` command, when the given term does not have the expected type
(here a product versus a proposition), applies it to as many
implicit arguments as necessary (in this case `? ? ?`).

D[xcover-set-4]
The system will now ask in turn for the three implicit arguments
passed to cover. The syntax `##[` starts a branching
that tackles every sub proof individually; otherwise every command
is applied to every subproof. The command `##|` switches to the next
subproof and `##]` ends the branching.
D*)

(*D

The fish relation
-----------------

The definition of fish works exactly the same way as for cover, except
that it is defined as a coinductive proposition.
D*)

ndefinition fish_set ≝ λf:∀A:Ax.Ω^A → A → CProp[0].
 λA,U,V.
  ∃a.a ∈ V ∧ f A U a.

interpretation "fish set" 'fish A U = (fish_set fish ? U A).
interpretation "fish" 'fish a U = (fish ? U a).

(*D

Introduction rule for fish
--------------------------

Matita is able to generate elimination rules for inductive types,
but not introduction rules for the coinductive case.

D*)

(** ncheck cover_rect_CProp0. *)

(*D

We thus have to define the introduction rule for fish by co-recursion.
Here we again use the proof mode of Matita to exhibit the body of the
corecursive function.

D*)

nlet corec fish_rec (A:Ax) (U: Ω^A)
 (P: Ω^A) (H1: P ⊆ U)
 (H2: ∀a:A. a ∈ P → ∀j: 𝐈 a. 𝐂 a j ≬ P): ∀a:A. ∀p: a ∈ P. a ⋉ U ≝ ?.
                                       (** screenshot "def-fish-rec-1".   *)
#a; #a_in_P; napply cfish;             (** screenshot "def-fish-rec-2".   *)
##[ nchange in H1 with (∀b.b ∈ P → b ∈ U); (** screenshot "def-fish-rec-2-1". *)
    napply H1;                         (** screenshot "def-fish-rec-3". 
*)
    nassumption;
##| #i; ncases (H2 a a_in_P i);        (** screenshot "def-fish-rec-5".   *)
    #x; *; #xC; #xP;                   (** screenshot "def-fish-rec-5-1". *)
    @;                                 (** screenshot "def-fish-rec-6".   *)
    ##[ napply x
    ##| @;                             (** screenshot "def-fish-rec-7".   *)
        ##[ napply xC;
        ##| napply (fish_rec ? U P);   (** screenshot "def-fish-rec-9".   *)
            nassumption;
        ##]
    ##]
##]
nqed.

(*D
D[def-fish-rec-1]
Note the first item of the context: it is the corecursive function we are
defining. This item allows us to perform the recursive call, but we will be
allowed to make such a call only after having generated a constructor of
the fish coinductive type.

We introduce `a` and `a_in_P`, and then return the fish constructor `cfish`.
Since the constructor accepts two arguments, the system asks for them.

D[def-fish-rec-2]
The first one is a proof that `a ∈ U`. This can be proved using `H1` and
`a_in_P`. With the `nchange` tactic we change `H1` into an equivalent form
(this step can be skipped, since the system would be able to unfold the
definition of inclusion by itself).

D[def-fish-rec-2-1]
It is now clear that `H1` can be applied. Again `napply` adds two
implicit arguments to `H1 ? ?`, obtaining a proof of `? ∈ U` given a proof
that `? ∈ P`. Thanks to unification, the system understands that `?` is
actually `a`, and it asks for a proof that `a ∈ P`.

D[def-fish-rec-3]
The `nassumption` tactic looks for the required proof in the context, and
in this case finds it in the last context position.

We now move to the second branch of the proof, corresponding to the second
argument of the `cfish` constructor.

We introduce `i` and then destruct `H2 a a_in_P i`, which, being a proof
of an overlap predicate, gives us an element and a proof that it is
both in `𝐂 a i` and `P`.
D[def-fish-rec-5]
We then introduce `x` and break the conjunction (the `*;` command is the
equivalent of `ncases` but operates on the first hypothesis that can
be introduced). We then introduce the two sides of the conjunction.

D[def-fish-rec-5-1]
The goal is now the existence of a point in `𝐂 a i` fished by `U`.
We thus need to use the introduction rule for the existential quantifier.
In CIC it is a defined notion: an inductive type with just
one constructor (one introduction rule) holding the witness and the proof
that the witness satisfies a proposition.

> ncheck Ex.

Instead of trying to remember the name of the constructor that should
be used as the argument of `napply`, we can ask the system to find the
constructor name by itself and apply it, with the `@` tactic.
Note that some inductive predicates, like the disjunction, have multiple
introduction rules, and thus `@` can be followed by a number identifying
the constructor.

D[def-fish-rec-6]
After choosing `x` as the witness, we have to prove a conjunction,
and we again apply the introduction rule for the inductively defined
predicate `∧`.

D[def-fish-rec-7]
The left hand side of the conjunction is trivial to prove, since it
is already in the context. The right hand side needs to perform
the co-recursive call.

D[def-fish-rec-9]
The co-recursive call needs some arguments, but all of them are
in the context. Instead of explicitly mentioning them, we use the
`nassumption` tactic, which simply tries to apply every context item.

D*)

(*D

Subset of covered/fished points
-------------------------------

We now have to define the subset of points of `S` covered by `U`.
We also define a prefix notation for it. Remember that the precedence
of the prefix form of a symbol has to be higher than the precedence
of its infix form.

D*)

ndefinition coverage : ∀A:Ax.∀U:Ω^A.Ω^A ≝ λA,U.{ a | a ◃ U }.
notation "◃U" non associative with precedence 55 for @{ 'coverage $U }.

interpretation "coverage cover" 'coverage U = (coverage ? U).

(*D

Here we define the equation characterizing the cover relation.
Even if it is not part of the paper, we also proved that `◃(U)` is
the minimum solution of
such an equation; the interested reader should be able to replay the proof
with Matita.

D*)

ndefinition cover_equation : ∀A:Ax.∀U,X:Ω^A.CProp[0] ≝ λA,U,X.
  ∀a.a ∈ X ↔ (a ∈ U ∨ ∃i:𝐈 a.∀y.y ∈ 𝐂 a i → y ∈ X).

ntheorem coverage_cover_equation : ∀A,U. cover_equation A U (◃U).
#A; #U; #a; @; #H;
##[ nelim H; #b;
    ##[ #bU; @1; nassumption;
    ##| #i; #CaiU; #IH; @2; @ i; #c; #cCbi; ncases (IH ? cCbi);
        ##[ #E; @; napply E;

ntheorem coverage_min_cover_equation :
  ##]
nqed.

(*D

We similarly define the subset of points "fished" by `F`, the
equation characterizing `⋉(F)`, and prove that fish is
the biggest solution of such an equation.

D*)

notation "⋉F" non associative with precedence 55
for @{ 'fished $F }.

interpretation "fished fish" 'fished F = (fished ? F).

ndefinition fish_equation : ∀A:Ax.∀F,X:Ω^A.CProp[0] ≝ λA,F,X.
  ∀a. a ∈ X ↔ a ∈ F ∧ ∀i:𝐈 a.∃y.y ∈ 𝐂 a i ∧ y ∈ X.

ntheorem fished_fish_equation : ∀A,F. fish_equation A F (⋉F).
#A; #F; #a; @; (* *; does not generate an out-type binding a *) #H; ncases H;
##[ #b; #bF; #H2; @ bF; #i; ncases (H2 i); #c; *; #cC; #cF; @c; @ cC;
    napply cF;
##| #aF; #H1; @ aF; napply H1;
##]
nqed.

ntheorem fished_max_fish_equation : ∀A,F,G. fish_equation A F G → G ⊆ ⋉F.
#A; #F; #G; #H; #a; #aG; napply (fish_rec … aG);
#b; ncases (H b); #H1; #_; #bG; ncases (H1 bG); #E1; #E2; nassumption;
nqed.
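(*D

As a side remark (not in the paper nor in the formalization), on a finite
axiom set the two equations just proved pin down `◃(U)` and `⋉(F)` as a
least and a greatest fixpoint, and both can be computed by plain iteration.
A minimal Python sketch, with an invented three-point axiom set:

```python
# Computational sketch, NOT part of the Matita development: on a FINITE
# axiom set (names and data invented for the example), the coverage is
# the least solution and the fished set the greatest solution of the
# equations proved above.

S = {'a', 'b', 'c'}
I = {'a': [0], 'b': [0], 'c': []}        # axiom indices I(a) for each a in S
C = {('a', 0): {'b'}, ('b', 0): {'c'}}   # C(a,i): a subset of S

def coverage(U):
    """Least X such that: x in X  iff  x in U, or C(x,i) <= X for some i."""
    X = set()
    while True:
        new = {x for x in S
               if x in U or any(C[x, i] <= X for i in I[x])}
        if new == X:
            return X
        X = new

def fished(F):
    """Greatest X such that: x in X  iff  x in F and C(x,i) meets X for all i."""
    X = set(S)
    while True:
        new = {x for x in X
               if x in F and all(C[x, i] & X for i in I[x])}
        if new == X:
            return X
        X = new

print(sorted(coverage({'c'})))     # ['a', 'b', 'c']: c covers itself, then b, then a
print(sorted(fished({'a', 'b'})))  # []: b needs some point of C(b,0) = {c} in F
```

The duality is visible in the code: coverage grows from the empty set,
while fished shrinks from the whole carrier.

D*)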
(*D

Part 2, the new set of axioms
-----------------------------

Since the name of defined objects (records included) has to be unique
within the same file, we prefix every field name
in the new definition of the axiom set with `n`.

D*)

nrecord nAx : Type[1] ≝ {
  nS:> Type[0];
  nI: nS → Type[0];
  nD: ∀a:nS. nI a → Type[0];
  nd: ∀a:nS. ∀i:nI a. nD a i → nS
}.

(*D

We again define a notation for the projections, making the
projected record an implicit argument. Note that, since we already have
a notation for `𝐈`, we just add another interpretation for it. The
system, looking at the argument of `𝐈`, will be able to choose
the correct interpretation.

D*)

notation "𝐃 \sub ( ⟨a,\emsp i⟩ )" non associative with precedence 70 for @{ 'D $a $i }.
notation "𝐝 \sub ( ⟨a,\emsp i,\emsp j⟩ )" non associative with precedence 70 for @{ 'd $a $i $j}.

interpretation "D" 'D a i = (nD ? a i).
interpretation "d" 'd a i j = (nd ? a i j).
interpretation "new I" 'I a = (nI ? a).

(*D

The first result the paper presents, to motivate the new formulation
of the axiom set, is the possibility to define an old axiom set
starting from a new one and vice versa. The key definition for
such a construction is the image of d(a,i).
The paper defines the image as

> Im[d(a,i)] = { d(a,i,j) | j : D(a,i) }

but this not so formal notation poses some problems. The image is
often used as the left hand side of the ⊆ predicate

> Im[d(a,i)] ⊆ V

Of course this writing is interpreted by the authors as follows

> ∀j:D(a,i). d(a,i,j) ∈ V

If we need to use the image to define `𝐂` (a subset of `S`) we are obliged
to form a subset, i.e. to place a single variable `{ here | … }` of type `S`.

> Im[d(a,i)] = { y | ∃j:D(a,i). 
y = d(a,i,j) }
+
+This poses no theoretical problems, since `S` is a setoid and thus equipped
+with an equality.
+
+Unless we define two different images, one for stating that the image is ⊆ of
+something and another one to define `𝕔`, we always end up using the latter.
+Thus the statement `Im[d(a,i)] ⊆ V` unfolds to
+
+> ∀x:S. ( ∃j.x = d(a,i,j) ) → x ∈ V
+
+That, up to rewriting with the equation defining `x`, is what we mean.
+The technical problem arises later, when `V` will be a complex
+construction that has to be proved extensional
+(i.e. ∀x,y. x = y → x ∈ V → y ∈ V).
+
+D*)
+
+include "logic/equality.ma".
+
 ndefinition image ≝ λA:nAx.λa:A.λi. { x | ∃j:𝔻 a i. x = 𝕕 a i j }.
 
 notation > "𝕀𝕞 [𝕕 term 90 a term 90 i]" non associative with precedence 70 for @{ 'Im $a $i }.
-notation "𝕀𝕞 [𝕕 \sub ( ⟨a,\emsp i⟩ )]" non associative with precedence 70 for @{ 'Im $a $i }.
+notation < "𝕀𝕞 [𝕕 \sub ( ⟨a,\emsp i⟩ )]" non associative with precedence 70 for @{ 'Im $a $i }.
 
 interpretation "image" 'Im a i = (image ? a i).
 
+(*D
+
+Thanks to our definition of image, we can define a function mapping a
+new axiom set to an old one and vice versa. Note that in the second
+definition, when we give the `𝕔` component, the projection of the
+Σ-type is inlined (constructed on the fly by `*;`)
+while in the paper it was named `fst`.
+
+D*)
+
 ndefinition Ax_of_nAx : nAx → Ax.
 #A; @ A (nI ?); #a; #i; napply (𝕀𝕞 [𝕕 a i]); nqed.
 
-ninductive sigma (A : Type[0]) (P : A → CProp[0]) : Type[0] ≝ 
-  sig_intro : ∀x:A.P x → sigma A P. 
-
-interpretation "sigma" 'sigma \eta.p = (sigma ? p). 
-
 ndefinition nAx_of_Ax : Ax → nAx.
 #A; @ A (I ?);
 ##[ #a; #i; napply (Σx:A.x ∈ 𝕔 a i);
@@ -213,29 +876,82 @@ ndefinition nAx_of_Ax : Ax → nAx.
 ##] nqed.
 
+nlemma Ax_nAx_equiv : 
+  ∀A:Ax. ∀a,i. C (Ax_of_nAx (nAx_of_Ax A)) a i ⊆ C A a i ∧
+             C A a i ⊆ C (Ax_of_nAx (nAx_of_Ax A)) a i. 
+#A; #a; #i; @; #b; #H;
+##[ ncases A in a i b H; #S; #I; #C; #a; #i; #b; #H;
+    nwhd in H; ncases H; #x; #E; nrewrite > E;
+    ncases x in E; #b; #Hb; #_; nnormalize; nassumption;
+##| ncases A in a i b H; #S; #I; #C; #a; #i; #b; #H; @;
+    ##[ @ b; nassumption;
+    ##| nnormalize; @; ##]
+##]
+nqed.
+
+(*D
+
+We then define the inductive type of ordinals, parametrized over an axiom
+set. We also attach some notations to the constructors.
+
+D*)
+
 ninductive Ord (A : nAx) : Type[0] ≝ 
 | oO : Ord A
 | oS : Ord A → Ord A
 | oL : ∀a:A.∀i.∀f:𝔻 a i → Ord A. Ord A.
 
-notation "Λ term 90 f" non associative with precedence 50 for @{ 'oL $f }.
+notation "0" non associative with precedence 90 for @{ 'oO }.
 notation "x+1" non associative with precedence 50 for @{'oS $x }.
+notation "Λ term 90 f" non associative with precedence 50 for @{ 'oL $f }.
 
-interpretation "ordinals Lambda" 'oL f = (oL ? ? ? f).
+interpretation "ordinals Zero" 'oO = (oO ?).
 interpretation "ordinals Succ" 'oS x = (oS ? x).
+interpretation "ordinals Lambda" 'oL f = (oL ? ? ? f).
+
+(*D
+
+The definition of `U⎽x` is by recursion over the ordinal `x`.
+We thus define a recursive function using the `nlet rec` command. 
+The `on x` directive tells
+the system on which argument the function is (structurally) recursive.
+
+In the `oS` case we use a local definition to name the recursive call
+since it is used twice.
+
+Note that Matita does not support notation in the left hand side
+of a pattern match, and thus the names of the constructors have to
+be spelled out verbatim.
+
+D*)
 
 nlet rec famU (A : nAx) (U : Ω^A) (x : Ord A) on x : Ω^A ≝ 
  match x with
  [ oO ⇒ U
- | oS y ⇒ let Un ≝ famU A U y in Un ∪ { x | ∀i.𝕀𝕞[𝕕 x i] ⊆ Un} 
+ | oS y ⇒ let U_n ≝ famU A U y in U_n ∪ { x | ∀i.𝕀𝕞[𝕕 x i] ⊆ U_n} 
 | oL a i f ⇒ { x | ∃j.x ∈ famU A U (f j) } ].
 
notation < "term 90 U \sub (term 90 x)" non associative with precedence 50 for @{ 'famU $U $x }.
notation > "U ⎽ term 90 x" non associative with precedence 50 for @{ 'famU $U $x }.
interpretation "famU" 'famU U x = (famU ? U x).
+
+(*D
+
+As the input notation for U_x we attach the similar `U⎽x`, where the
+underscore, a character valid in identifier names, has been replaced by `⎽`,
+which is not. The symbol `⎽` can act as a separator, and can be typed as an
+alternative for `_` (i.e. pressing ALT-L after `_`).
+
+The notion ◃(U) has to be defined as the subset of elements `y`
+belonging to `U⎽x` for some `x`. Moreover, we have to define the notion
+of cover between sets again, since the one defined at the beginning
+of the tutorial works only for the old axiom set.
+
+D*)
 
-ndefinition ord_coverage : ∀A:nAx.∀U:Ω^A.Ω^A ≝ λA,U.{ y | ∃x:Ord A. y ∈ famU ? U x }.
+ndefinition ord_coverage : ∀A:nAx.∀U:Ω^A.Ω^A ≝ 
+  λA,U.{ y | ∃x:Ord A. y ∈ famU ? U x }.
 
 ndefinition ord_cover_set ≝ λc:∀A:nAx.Ω^A → Ω^A.λA,C,U.
   ∀y.y ∈ C → y ∈ c A U.
@@ -244,68 +960,201 @@
 interpretation "coverage new cover" 'coverage U = (ord_coverage ? U).
 interpretation "new covers set" 'covers a U = (ord_cover_set ord_coverage ? a U).
 interpretation "new covers" 'covers a U = (mem ? (ord_coverage ? U) a).
 
-ntheorem new_coverage_reflexive:
-  ∀A:nAx.∀U:Ω^A.∀a. a ∈ U → a ◃ U.
-#A; #U; #a; #H; @ (oO A); napply H;
-nqed.
+(*D
 
-nlemma ord_subset:
-  ∀A:nAx.∀a:A.∀i,f,U.∀j:𝔻 a i.U⎽(f j) ⊆ U⎽(Λ f).
+Before proving that this cover relation validates the reflexivity and infinity
+rules, we prove this little technical lemma that is used in the proof of the
+infinity rule.
+
+D*)
+
+nlemma ord_subset: ∀A:nAx.∀a:A.∀i,f,U.∀j:𝔻 a i. U⎽(f j) ⊆ U⎽(Λ f).
#A; #a; #i; #f; #U; #j; #b; #bUf; @ j; nassumption;
nqed.
 
-naxiom AC : ∀A,a,i,U.(∀j:𝔻 a i.∃x:Ord A.𝕕 a i j ∈ U⎽x) → (Σf.∀j:𝔻 a i.𝕕 a i j ∈ U⎽(f j)).
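(*D

As a cross-check of the ordinal-indexed construction, here is a Lean 4
transliteration of `Ord`, `famU` and `ord_coverage` (all names are ours and
subsets are modelled as predicates; this is a sketch, not the Matita
development itself):

```lean
/- Subsets as predicates keep the sketch self-contained. -/
abbrev Subs (S : Type) := S → Prop

/- The new axiom set: each index carries a family D and a map d into S. -/
structure nAx where
  S : Type
  I : S → Type
  D : (a : S) → I a → Type
  d : (a : S) → (i : I a) → D a i → S

/- Ordinals branching over the D-families, as in `Ord` above. -/
inductive Ord (A : nAx) where
  | oO : Ord A
  | oS : Ord A → Ord A
  | oL : (a : A.S) → (i : A.I a) → (A.D a i → Ord A) → Ord A

/- U⎽x by structural recursion: the successor stage adds the points all of
   whose d-images land in the previous stage; the limit stage is the union
   over the branches. -/
def famU (A : nAx) (U : Subs A.S) : Ord A → Subs A.S
  | .oO => U
  | .oS y => fun x =>
      famU A U y x ∨ ∀ i : A.I x, ∀ j : A.D x i, famU A U y (A.d x i j)
  | .oL _ _ f => fun x => ∃ j, famU A U (f j) x

/- ◃(U): a point is covered iff it appears at some ordinal stage. -/
def ordCoverage (A : nAx) (U : Subs A.S) : Subs A.S :=
  fun y => ∃ x : Ord A, famU A U x y
```

D*)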
+(*D
+
+The proof of infinity uses the following form of the Axiom of Choice,
+that cannot be proved inside Matita, since the existential quantifier
+lives in the sort of predicative propositions while the sigma in the conclusion
+lives in the sort of data types, and thus the former cannot be eliminated
+to provide the witness for the latter.
 
-naxiom setoidification :
-  ∀A:nAx.∀a,b:A.∀U.a=b → b ∈ U → a ∈ U.
 
-(*DOCBEGIN
+D*)
 
+nlemma AC_fake : ∀A,a,i,U.
+  (∀j:𝔻 a i.Σx:Ord A.𝕕 a i j ∈ U⎽x) → (Σf.∀j:𝔻 a i.𝕕 a i j ∈ U⎽(f j)).
+#A; #a; #i; #U; #H; @;
+##[ #j; ncases (H j); #x; #_; napply x;
+##| #j; ncases (H j); #x; #Hx; napply Hx; ##]
+nqed.
 
-Bla Bla,
 
+naxiom AC : ∀A,a,i,U. 
+  (∀j:𝔻 a i.∃x:Ord A.𝕕 a i j ∈ U⎽x) → (Σf.∀j:𝔻 a i.𝕕 a i j ∈ U⎽(f j)).
 
-
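(*D

The sort-based obstruction can be seen concretely: when the witness already
lives in a data type (a Σ, as in `AC_fake`), the choice function is definable
by projections. A Lean 4 analogue (names are ours) using `PSigma`:

```lean
/- Provable "choice" when the witness already lives in Type: the choice
   function is just the first projection, as in `AC_fake`. -/
def acFake {J O : Type} (P : J → O → Prop)
    (H : ∀ j : J, PSigma fun x : O => P j x) :
    PSigma fun f : J → O => ∀ j : J, P j (f j) :=
  ⟨fun j => (H j).1, fun j => (H j).2⟩
```

If the hypothesis were instead `∀ j, ∃ x, P j x` (an existential in `Prop`),
the same term would be rejected: a proposition cannot be eliminated to build
the function `f`, which is why the development has to assume `AC` as an axiom.

D*)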