(*D

Inductively generated formal topologies in Matita
=================================================

This is a not so short introduction to [Matita][2], based on
the formalization of the paper

> Between formal topology and game theory: an
> explicit solution for the conditions for an
> inductive generation of formal topologies

by Stefano Berardi and Silvio Valentini.

The tutorial and the formalization are by Enrico Tassi.

The reader should be familiar with inductively generated
formal topologies and have some basic knowledge of type theory and λ-calculus.

A considerable part of this tutorial is devoted to explaining how to define
notations that resemble the ones used in the original paper. We believe
this is an important part of every formalization, not only from the
aesthetic point of view, but also from the practical one. Staying
consistent with the paper allows us to follow it closely, and hopefully
makes the formalization (at least the definitions and the proved
statements) readable to the authors of the paper.

The formalization uses the "new generation" version of Matita
(that will be named 1.x when finally released).
The last stable release of the "old" system is 0.5.7; the ng system
coexists with the old one in every development release
(named "nightly builds" on the download page of Matita)
with a version strictly greater than 0.5.7.

To read this tutorial in HTML format, you need a decent browser
equipped with a Unicode capable font. Use the PDF format if some
symbols are not displayed correctly.

Orienteering
------------

The graphical interface of Matita is composed of three windows:
the script window, on the left, is where you type; the sequent
window, on the top right, is where the system shows you the ongoing proof;
the error window, on the bottom right, is where the system complains.
On top of the script window, five buttons drive the processing of
the proof script. From left to right they request the system to:

- go back to the beginning of the script
- go back one step
- go to the current cursor position
- advance one step
- advance to the end of the script

When the system processes a command, it locks the part of the script
corresponding to the command, so that you cannot edit it anymore
(without going back). Locked parts are coloured in blue.

The sequent window is hypertextual, i.e. you can click on symbols
to jump to their definition, or switch between different notations
for the same expression (for example, equality has two notations,
one of which makes the type of the arguments explicit).

Everywhere in the script you can use the `ncheck (term).` command to
ask for the type of a given term. If you do that in the middle of a proof,
the term is assumed to live in the current proof context (i.e. it can use
the variables introduced so far).

To ease the typing of mathematical symbols, the script window
implements two unusual input facilities:

- some TeX symbols can be typed using their TeX names, and are
  automatically converted to UTF-8 characters.
For a list of
  the supported TeX names, see the menu: View ▹ TeX/UTF-8 Table.
  Moreover, some ASCII art is understood as well, like `=>` and `->`,
  meaning the double and single arrows.
  Here we recall some of these "shortcuts":

  - ∀ can be typed with `\forall`
  - λ can be typed with `\lambda`
  - ≝ can be typed with `\def` or `:=`
  - → can be typed with `\to` or `->`

- some symbols have variants, like the ≤ relation and ∼, ≰, ≠.
  The user can cycle between variants by typing one of them and then
  pressing ALT-L. Note that letters also have variants: for
  example W has Ω, 𝕎 and 𝐖, L has Λ, 𝕃, and 𝐋, F has Φ, …
  Variants are listed in the aforementioned TeX/UTF-8 table.

The syntax of terms (and types) is that of the λ-calculus CIC,
on which Matita is based. The main syntactic difference w.r.t.
the usual mathematical notation is function application, written
`(f x y)` in place of `f(x,y)`.

Pressing `F1` opens the Matita manual.

CIC (as [implemented in Matita][3]) in a nutshell
-------------------------------------------------

CIC is a full and functional Pure Type System (all products exist,
and their sort is determined by the target) with an impredicative sort
Prop and a predicative sort Type. It features both dependent types and
polymorphism, like the [Calculus of Constructions][4]. Proofs and terms share
the same syntax, and they can occur in types.

The environment used in the typing judgement can be populated with
well typed definitions or theorems, (co)inductive types validating positivity
conditions, and recursive functions provably total by a simple syntactic
analysis (recursive calls are allowed only on structurally smaller subterms).
Co-recursive functions can be defined as well, and must satisfy the dual
condition, i.e. they may perform the co-recursive call only after having
generated a constructor (a piece of output).
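
As a concrete illustration (our own sketch, not part of the original
development), a structurally recursive function and a guarded co-recursive
definition might look as follows; the syntax is approximate ng-Matita, and
the names `plus`, `stream` and `ones` are ours:

```
nlet rec plus (n : nat) (m : nat) on n : nat ≝
 match n with [ O ⇒ m | S p ⇒ S (plus p m) ].

ncoinductive stream : Type[0] ≝ scons : nat → stream → stream.
nlet corec ones : stream ≝ scons 1 ones.
```

The recursive call of `plus` is on the structurally smaller subterm `p`,
while the co-recursive call of `ones` is guarded by the constructor `scons`,
so both definitions satisfy the syntactic totality conditions just described.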

The CIC λ-calculus is equipped with a pattern matching construct (match) on
inductive types defined in the environment. This construct, together with the
possibility of defining total recursive functions, allows one to define
eliminators (recursors) for (co)inductive types.

Types are compared up to conversion. Since types may depend on terms, conversion
involves β-reduction, δ-reduction (definition unfolding), ζ-reduction (local
definition unfolding), ι-reduction (pattern matching simplification),
μ-reduction (recursive function computation) and ν-reduction (co-fixpoint
computation).

Since we are going to formalize constructive and predicative mathematics
in an intensional type theory like CIC, we try to establish some terminology.
Type is the sort of sets equipped with the `Id` equality (i.e. an intensional,
non quotiented set).

We write `Type[i]` to mean a Type in the predicative hierarchy
of types. To ease comprehension we will use `Type[0]` for sets
and `Type[1]` for classes. The index `i` is just a label: constraints among
universes are declared by the user. The standard library defines

> Type[0] < Type[1] < Type[2]

Matita implements a variant of CIC in which constructive and predicative
propositions are distinguished from predicative data types.

For every `Type[i]` there is a corresponding level of predicative
propositions `CProp[i]` (the initial C is due to historical reasons, and
stands for constructive).
A predicative proposition cannot be eliminated toward
`Type[j]` unless it carries no computational content (i.e. it is an inductive
proposition with 0 or 1 constructors whose arguments are propositional, like
`Id` and `And` but not like `Or`).

The distinction between predicative propositions and predicative data types
is a peculiarity of Matita (for example, in CIC as implemented by Coq they are
the same).
The additional restriction of not allowing the elimination of a CProp
toward a Type makes the theory of Matita minimal in the following sense:

Theorems proved in CIC as implemented in Matita can be reused in a classical
and impredicative framework (i.e. forcing Matita to collapse the hierarchy of
constructive propositions and assuming the excluded middle on them).
Alternatively, one can decide to collapse predicative propositions and
predicative data types, recovering the Axiom of Choice in the sense of
Martin-Löf (i.e. ∃ really carries a witness and can be eliminated to inhabit
a type).

This implementation of CIC is the result of a collaboration with M. Maietti,
G. Sambin and S. Valentini of the University of Padua.

Formalization choices
---------------------

There are many different ways of formalizing the same piece of mathematics
in CIC, depending on what our interests are. There is usually a trade-off
between the reusability of the formalization and its complexity.

In this work, our decisions mainly regarded the following two areas:

- Axiom of Choice: controlled use or not
- Equality: Id or not

### Axiom of Choice

In this paper it is clear that the authors are interested in using the Axiom
of Choice, thus choosing to identify ∃ and Σ (i.e. working in the
leftmost box of the graph "Coq's CIC (work in CProp)") would be a safe decision
(that is, the authors of the paper could not complain that we formalized
something different from what they had in mind).

Anyway, we may benefit from the minimality of CIC as implemented in Matita,
"asking" the type system to ensure that we do not use the Axiom of Choice
elsewhere in the proof (by mistake or as a shortcut). If we identified ∃ and Σ
from the very beginning, the system would never complain about a use of the
Axiom of Choice.
Moreover, the elimination of an inductive type (like ∃) is such a common
operation that the syntax chosen for the elimination command is very compact
and uninformative, hard to spot for a human being
(in fact it is just two characters long!).

We decided to formalize the whole paper without identifying
CProp and Type, assuming the Axiom of Choice as a real axiom
(i.e. a black hole with no computational content, a function with no body).

It is clear that this approach gives us full control over when and where we
really use the Axiom of Choice. But what are we losing? What happens to the
computational content of the proofs if the Axiom of Choice gives no content
back?

It really depends on when we actually look at the computational content of
the proof and "run" that program. We can extract the content and run it
before or after informing the system that our propositions are actually code
(i.e. identifying ∃ and Σ). If we run the program before, as soon as the
computation reaches the Axiom of Choice it stops, giving no output. If we
tell the system that CProp and Type are the same, we can exhibit a body for
the Axiom of Choice (i.e. a projection) and the extracted code computes an
output.

Note that the computational content is there even if the Axiom of Choice is
an axiom; the difference is just that we cannot use it (the typing rules
inhibit the elimination of the existential). This is possible only thanks to
the minimality of CIC as implemented in Matita.

### Equality

What we have to decide here is which models we admit. The paper does not
mention quotiented sets, thus using an intensional equality is enough
to capture the intended content of the paper. Nevertheless, the formalization
could not then be reused in a concrete example where the (families of) sets
that build the axiom set are quotiented.

Matita gives support for setoid rewriting under a context built with
non dependent morphisms.
As we will detail later, if we assume a generic
equality over the carrier of our axiom set, a non trivial inductive
construction over the ordinals has to be proved to respect extensionality
(i.e. if the input is an extensional set, so is the output).
The proof requires rewriting under the context formed by the families of sets
`I` and `D`, which have a dependent type. Declaring them as dependently typed
morphisms is possible, but Matita does not provide adequate support for them,
and doing so would thus require more effort than formalizing the whole paper.

Anyway, in a preliminary attempt at the formalization, we tried the setoid
approach, reaching the end of the formalization, but we had to assume the
proof of the extensionality of the `U_x` construction (while there is no
need to assume the same property for `F_x`!).

The current version of the formalization uses `Id`.

The standard library and the `include` command
----------------------------------------------

Some basic notions, like subset, membership, intersection and union,
are part of the standard library of Matita.

These notions come with some standard notation attached to them:

- A ∪ B can be typed with `A \cup B`
- A ∩ B can be typed with `A \cap B`
- A ≬ B can be typed with `A \between B`
- x ∈ A can be typed with `x \in A`
- Ω^A, the type of the subsets of A, can be typed with `\Omega ^ A`

The `include` command tells Matita to load a part of the library;
in particular, the part that we will use can be loaded as follows:

D*)

include "sets/sets.ma".

(*D
+Some basic results that we will use are also part of the sets library: -Initial setup -------------- +- subseteq\_union\_l: âA.âU,V,W:Ω^A.U â W â V â W â U ⪠V â W +- subseteq\_intersection\_r: âA.âU,V,W:Ω^A.W â U â W â V â W â U â© V -The library, inclusion of `sets/sets.ma`, notation defined: Ω^A. -Symbols (see menu: View â¹ TeX/UTF-8 Table): +Defining Axiom set +------------------ -- `Ω` can be typed \Omega -- `â` \Forall -- `λ` \lambda -- `â` \def -- `â` -> +A set of axioms is made of a set `S`, a family of sets `I` and a +family `C` of subsets of `S` indexed by elements `a` of `S` +and elements of `I(a)`. -Virtuals, ALT-L, for example I changes into ð, finally ð. +It is desirable to state theorems like "for every set of axioms, â¦" +without explicitly mentioning S, I and C. To do that, the three +components have to be grouped into a record (essentially a dependently +typed tuple). The system is able to generate the projections +of the record automatically, and they are named as the fields of +the record. So, given an axiom set `A` we can obtain the set +with `S A`, the family of sets with `I A` and the family of subsets +with `C A`. -DOCEND*) +D*) -include "sets/sets.ma". +nrecord Ax : Type[1] â { + S :> Type[0]; + I : S â Type[0]; + C : âa:S. I a â Ω^S +}. -(*DOCBEGIN +(*D -Axiom set ---------- +Forget for a moment the `:>` that will be detailed later, and focus on +the record definition. It is made of a list of pairs: a name, followed +by `:` and the its type. It is a dependently typed tuple, thus +already defined names (fields) can be used in the types that follow. -records, ... +Note that the field `S` was declared with `:>` instead of a simple `:`. +This declares the `S` projection to be a coercion. A coercion is +a "cast" function the system automatically inserts when it is needed. 
+In that case, the projection `S` has type `Ax â setoid`, and whenever +the expected type of a term is `setoid` while its type is `Ax`, the +system inserts the coercion around it, to make the whole term well typed. -DOCEND*) +When formalizing an algebraic structure, declaring the carrier as a +coercion is a common practice, since it allows to write statements like -nrecord Ax : Type[1] â { - S:> setoid; - I: S â Type[0]; - C: âa:S. I a â Ω ^ S -}. + âG:Group.âx:G.x * x^-1 = 1 + +The quantification over `x` of type `G` is ill-typed, since `G` is a term +(of type `Group`) and thus not a type. Since the carrier projection +`carr` is a coercion, that maps a `Group` into the type of +its elements, the system automatically inserts `carr` around `G`, +obtaining `â¦âx: carr G.â¦`. + +Coercions are hidden by the system when it displays a term. +In this particular case, the coercion `S` allows to write (and read): + + âA:Ax.âa:A.⦠+ +Since `A` is not a type, but it can be turned into a `setoid` by `S` +and a `setoid` can be turned into a type by its `carr` projection, the +composed coercion `carr â S` is silently inserted. + +Implicit arguments +------------------ + +Something that is not still satisfactory, is that the dependent type +of `I` and `C` are abstracted over the Axiom set. To obtain the +precise type of a term, you can use the `ncheck` command as follows. + +D*) -(*HIDE*) -notation "ð \sub( â¨aâ© )" non associative with precedence 70 for @{ 'I $a }. -notation "ð \sub ( â¨a,\emsp iâ© )" non associative with precedence 70 for @{ 'C $a $i }. -(*UNHIDE*) +(** ncheck I. *) (* shows: âA:Ax.A â Type[0] *) +(** ncheck C. *) (* shows: âA:Ax.âa:A.A â I A a â Ω^A *) -(*DOCBEGIN +(*D -Notation for the axiom set --------------------------- +One would like to write `I a` and not `I A a` under a context where +`A` is an axiom set and `a` has type `S A` (or thanks to the coercion +mechanism simply `A`). In Matita, a question mark represents an implicit +argument, i.e. 
a missing piece of information the system is asked to +infer. Matita performs Hindley-Milner-style type inference, thus writing +`I ? a` is enough: since the second argument of `I` is typed by the +first one, the first (omitted) argument can be inferred just +computing the type of `a` (that is `A`). -bla bla +D*) -DOCEND*) +(** ncheck (âA:Ax.âa:A.I ? a). *) (* shows: âA:Ax.âa:A.I A a *) + +(*D + +This is still not completely satisfactory, since you have always to type +`?`; to fix this minor issue we have to introduce the notational +support built in Matita. + +Notation for I and C +-------------------- + +Matita is quipped with a quite complex notational support, +allowing the user to define and use mathematical notations +([From Notation to Semantics: There and Back Again][1]). + +Since notations are usually ambiguous (e.g. the frequent overloading of +symbols) Matita distinguishes between the term level, the +content level, and the presentation level, allowing multiple +mappings between the content and the term level. + +The mapping between the presentation level (i.e. what is typed on the +keyboard and what is displayed in the sequent window) and the content +level is defined with the `notation` command. When followed by +`>`, it defines an input (only) notation. + +D*) notation > "ð term 90 a" non associative with precedence 70 for @{ 'I $a }. notation > "ð term 90 a term 90 i" non associative with precedence 70 for @{ 'C $a $i }. +(*D + +The first notation defines the writing `ð a` where `a` is a generic +term of precedence 90, the maximum one. This high precedence forces +parentheses around any term of a lower precedence. For example `ð x` +would be accepted, since identifiers have precedence 90, but +`ð f x` would be interpreted as `(ð f) x`. In the latter case, parentheses +have to be put around `f x`, thus the accepted writing would be `ð (f x)`. + +To obtain the `ð` is enough to type `I` and then cycle between its +similar symbols with ALT-L. 
The same for `ð`. Notations cannot use +regular letters or the round parentheses, thus their variants (like the +bold ones) have to be used. + +The first notation associates `ð a` with `'I $a` where `'I` is a +new content element to which a term `$a` is passed. + +Content elements have to be interpreted, and possibly multiple, +incompatible, interpretations can be defined. + +D*) + interpretation "I" 'I a = (I ? a). interpretation "C" 'C a i = (C ? a i). -(*DOCBEGIN +(*D -The first definition --------------------- +The `interpretation` command allows to define the mapping between +the content level and the terms level. Here we associate the `I` and +`C` projections of the Axiom set record, where the Axiom set is an implicit +argument `?` to be inferred by the system. -![bla bla][def-fish-rec] +Interpretation are bi-directional, thus when displaying a term like +`C _ a i`, the system looks for a presentation for the content element +`'C a i`. -DOCEND*) +D*) -ndefinition cover_set â λc:âA:Ax.Ω^A â A â CProp[0].λA,C,U. - ây.y â C â c A U y. +notation < "ð \sub( â¨aâ© )" non associative with precedence 70 for @{ 'I $a }. +notation < "ð \sub( â¨a,\emsp iâ© )" non associative with precedence 70 for @{ 'C $a $i }. + +(*D + +For output purposes we can define more complex notations, for example +we can put bold parentheses around the arguments of `ð` and `ð`, decreasing +the size of the arguments and lowering their baseline (i.e. putting them +as subscript), separating them with a comma followed by a little space. + +The first (technical) definition +-------------------------------- + +Before defining the cover relation as an inductive predicate, one +has to notice that the infinity rule uses, in its hypotheses, the +cover relation between two subsets, while the inductive predicate +we are going to define relates an element and a subset. 
+ + a â U i â I(a) C(a,i) â U + (reflexivity) â¼â¼â¼â¼â¼â¼â¼â¼â¼ (infinity) â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼ + a â U a â U + +An option would be to unfold the definition of cover between subsets, +but we prefer to define the abstract notion of cover between subsets +(so that we can attach a (ambiguous) notation to it). + +Anyway, to ease the understanding of the definition of the cover relation +between subsets, we first define the inductive predicate unfolding the +definition, and we later refine it with. + +D*) + +ninductive xcover (A : Ax) (U : Ω^A) : A â CProp[0] â +| xcreflexivity : âa:A. a â U â xcover A U a +| xcinfinity : âa:A.âi:ð a. (ây.y â ð a i â xcover A U y) â xcover A U a. + +(*D + +We defined the xcover (x will be removed in the final version of the +definition) as an inductive predicate. The arity of the inductive +predicate has to be carefully analyzed: + +> (A : Ax) (U : Ω^A) : A â CProp[0] + +The syntax separates with `:` abstractions that are fixed for every +constructor (introduction rule) and abstractions that can change. In that +case the parameter `U` is abstracted once and for all in front of every +constructor, and every occurrence of the inductive predicate is applied to +`U` in a consistent way. Arguments abstracted on the right of `:` are not +constant, for example the xcinfinity constructor introduces `a â U`, +but under the assumption that (for every y) `y â U`. In that rule, the left +had side of the predicate changes, thus it has to be abstracted (in the arity +of the inductive predicate) on the right of `:`. + +The intuition Valentini suggests is that we are defining the unary predicate +"being covered by U" (i.e. `_ â U`) and not "being covered" (i.e. `_ â _`). +Unluckily, the syntax of Matita forces us to abstract `U` first, and +we will make it the second argument of the predicate using +the notational support Matita offers. + +D*) + +(** ncheck xcreflexivity. 
*) (* shows: âA:Ax.âU:Ω^A.âa:A.aâU â xcover A U a *) + +(*D + +We want now to abstract out `(ây.y â ð a i â xcover A U y)` and define +a notion `cover_set` to which a notation `ð a i â U` can be attached. + +This notion has to be abstracted over the cover relation (whose +type is the arity of the inductive `xcover` predicate just defined). + +Then it has to be abstracted over the arguments of that cover relation, +i.e. the axiom set and the set `U`, and the subset (in that case `ð a i`) +sitting on the left hand side of `â`. + +D*) + +ndefinition cover_set : + âcover: âA:Ax.Ω^A â A â CProp[0]. âA:Ax.âC,U:Ω^A. CProp[0] +â + λcover. λA, C,U. ây.y â C â cover A U y. + +(*D + +The `ndefinition` command takes a name, a type and body (of that type). +The type can be omitted, and in that case it is inferred by the system. +If the type is given, the system uses it to infer implicit arguments +of the body. In that case all types are left implicit in the body. + +We now define the notation `a â b`. Here the keywork `hvbox` +and `break` tell the system how to wrap text when it does not +fit the screen (they can be safely ignored for the scope of +this tutorial). We also add an interpretation for that notation, +where the (abstracted) cover relation is implicit. The system +will not be able to infer it from the other arguments `C` and `U` +and will thus prompt the user for it. This is also why we named this +interpretation `covers set temp`: we will later define another +interpretation in which the cover relation is the one we are going to +define. + +D*) -(* a \ltri b *) notation "hvbox(a break â b)" non associative with precedence 45 for @{ 'covers $a $b }. interpretation "covers set temp" 'covers C U = (cover_set ?? C U). +(*D + +The cover relation +------------------ + +We can now define the cover relation using the `â` notation for +the premise of infinity. + +D*) + ninductive cover (A : Ax) (U : Ω^A) : A â CProp[0] â -| creflexivity : âa:A. a â U â cover ? 
U a -| cinfinity : âa:A.âi:ð a. ð a i â U â cover ? U a. +| creflexivity : âa. a â U â cover A U a +| cinfinity : âa. âi. ð a i â U â cover A U a. +(** screenshot "cover". *) napply cover; nqed. +(*D + +Note that the system accepts the definition +but prompts the user for the relation the `cover_set` notion is +abstracted on. + + + +The horizontal line separates the hypotheses from the conclusion. +The `napply cover` command tells the system that the relation +it is looking for is exactly our first context entry (i.e. the inductive +predicate we are defining, up to α-conversion); while the `nqed` command +ends a definition or proof. + +We can now define the interpretation for the cover relation between an +element and a subset first, then between two subsets (but this time +we fix the relation `cover_set` is abstracted on). + +D*) + interpretation "covers" 'covers a U = (cover ? U a). interpretation "covers set" 'covers a U = (cover_set cover ? a U). +(*D + +We will proceed similarly for the fish relation, but before going +on it is better to give a short introduction to the proof mode of Matita. +We define again the `cover_set` term, but this time we build +its body interactively. In the λ-calculus Matita is based on, CIC, proofs +and terms share the same syntax, and it is thus possible to use the +commands devoted to build proof term also to build regular definitions. +A tentative semantics for the proof mode commands (called tactics) +in terms of sequent calculus rules are given in the +appendix. + +D*) + +ndefinition xcover_set : + âc: âA:Ax.Ω^A â A â CProp[0]. âA:Ax.âC,U:Ω^A. CProp[0]. + (** screenshot "xcover-set-1". *) +#cover; #A; #C; #U; (** screenshot "xcover-set-2". *) +napply (ây:A.y â C â ?); (** screenshot "xcover-set-3". *) +napply cover; (** screenshot "xcover-set-4". *) +##[ napply A; +##| napply U; +##| napply y; +##] +nqed. + +(*D[xcover-set-1] +The system asks for a proof of the full statement, in an empty context. 
+ +The `#` command is the â-introduction rule, it gives a name to an +assumption putting it in the context, and generates a λ-abstraction +in the proof term. + +D[xcover-set-2] +We have now to provide a proposition, and we exhibit it. We left +a part of it implicit; since the system cannot infer it it will +ask for it later. +Note that the type of `ây:A.y â C â ?` is a proposition +whenever `?` is a proposition. + +D[xcover-set-3] +The proposition we want to provide is an application of the +cover relation we have abstracted in the context. The command +`napply`, if the given term has not the expected type (in that +case it is a product versus a proposition) it applies it to as many +implicit arguments as necessary (in that case `? ? ?`). + +D[xcover-set-4] +The system will now ask in turn the three implicit arguments +passed to cover. The syntax `##[` allows to start a branching +to tackle every sub proof individually, otherwise every command +is applied to every subproof. The command `##|` switches to the next +subproof and `##]` ends the branching. +D*) + +(*D + +The fish relation +----------------- + +The definition of fish works exactly the same way as for cover, except +that it is defined as a coinductive proposition. +D*) + ndefinition fish_set â λf:âA:Ax.Ω^A â A â CProp[0]. λA,U,V. âa.a â V ⧠f A U a. @@ -103,30 +651,156 @@ nqed. interpretation "fish set" 'fish A U = (fish_set fish ? U A). interpretation "fish" 'fish a U = (fish ? U a). +(*D + +Introduction rule for fish +--------------------------- + +Matita is able to generate elimination rules for inductive types +D*) + +(** ncheck cover_rect_CProp0. *) + +(*D + +but not introduction rules for the coinductive case. + + P â U (âx,j.x â P â C(x,j) ⬠P) a â P + (fish intro) â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼â¼ + a â U + +We thus have to define the introduction rule for fish by co-recursion. 
+Here we again use the proof mode of Matita to exhibit the body of the +corecursive function. + +D*) + nlet corec fish_rec (A:Ax) (U: Ω^A) (P: Ω^A) (H1: P â U) - (H2: âa:A. a â P â âj: ð a. ð a j ⬠P): - âa:A. âp: a â P. a â U â ?. -#a; #p; napply cfish; (** screenshot "def-fish-rec". *) -##[ napply H1; napply p; -##| #i; ncases (H2 a p i); #x; *; #xC; #xP; @; ##[napply x] - @; ##[ napply xC ] napply (fish_rec ? U P); nassumption; + (H2: âa:A. a â P â âj: ð a. ð a j ⬠P): âa:A. âp: a â P. a â U â ?. + (** screenshot "def-fish-rec-1". *) +#a; #a_in_P; napply cfish; (** screenshot "def-fish-rec-2". *) +##[ nchange in H1 with (âb.bâP â bâU); (** screenshot "def-fish-rec-2-1". *) + napply H1; (** screenshot "def-fish-rec-3". *) + nassumption; +##| #i; ncases (H2 a a_in_P i); (** screenshot "def-fish-rec-5". *) + #x; *; #xC; #xP; (** screenshot "def-fish-rec-5-1". *) + @; (** screenshot "def-fish-rec-6". *) + ##[ napply x + ##| @; (** screenshot "def-fish-rec-7". *) + ##[ napply xC; + ##| napply (fish_rec ? U P); (** screenshot "def-fish-rec-9". *) + nassumption; + ##] + ##] ##] nqed. -notation "âU" non associative with precedence 55 -for @{ 'coverage $U }. +(*D +D[def-fish-rec-1] +Note the first item of the context, it is the corecursive function we are +defining. This item allows to perform the recursive call, but we will be +allowed to do such call only after having generated a constructor of +the fish coinductive type. + +We introduce `a` and `p`, and then return the fish constructor `cfish`. +Since the constructor accepts two arguments, the system asks for them. + +D[def-fish-rec-2] +The first one is a proof that `a â U`. This can be proved using `H1` and `p`. +With the `nchange` tactic we change `H1` into an equivalent form (this step +can be skipped, since the system would be able to unfold the definition +of inclusion by itself) + +D[def-fish-rec-2-1] +It is now clear that `H1` can be applied. Again `napply` adds two +implicit arguments to `H1 ? 
?`, obtaining a proof of `? â U` given a proof +that `? â P`. Thanks to unification, the system understands that `?` is actually +`a`, and it asks a proof that `a â P`. + +D[def-fish-rec-3] +The `nassumption` tactic looks for the required proof in the context, and in +that cases finds it in the last context position. + +We move now to the second branch of the proof, corresponding to the second +argument of the `cfish` constructor. + +We introduce `i` and then we destruct `H2 a p i`, that being a proof +of an overlap predicate, give as an element and a proof that it is +both in `ð a i` and `P`. + +D[def-fish-rec-5] +We then introduce `x`, break the conjunction (the `*;` command is the +equivalent of `ncases` but operates on the first hypothesis that can +be introduced). We then introduce the two sides of the conjunction. + +D[def-fish-rec-5-1] +The goal is now the existence of a point in `ð a i` fished by `U`. +We thus need to use the introduction rule for the existential quantifier. +In CIC it is a defined notion, that is an inductive type with just +one constructor (one introduction rule) holding the witness and the proof +that the witness satisfies a proposition. + +> ncheck Ex. + +Instead of trying to remember the name of the constructor, that should +be used as the argument of `napply`, we can ask the system to find by +itself the constructor name and apply it with the `@` tactic. +Note that some inductive predicates, like the disjunction, have multiple +introduction rules, and thus `@` can be followed by a number identifying +the constructor. + +D[def-fish-rec-6] +After choosing `x` as the witness, we have to prove a conjunction, +and we again apply the introduction rule for the inductively defined +predicate `â§`. + +D[def-fish-rec-7] +The left hand side of the conjunction is trivial to prove, since it +is already in the context. The right hand side needs to perform +the co-recursive call. 
+
+D[def-fish-rec-9]
+The co-recursive call needs some arguments, but all of them are
+in the context. Instead of explicitly mentioning them, we use the
+`nassumption` tactic, which simply tries to apply every context item.
+
+D*)
+
+(*D
+
+Subset of covered/fished points
+-------------------------------
+
+We now have to define the subset of points of `S` covered by `U`.
+We also define a prefix notation for it. Remember that the precedence
+of the prefix form of a symbol has to be higher than the precedence
+of its infix form.
+
+D*)
 
 ndefinition coverage : ∀A:Ax.∀U:Ω^A.Ω^A ≝ λA,U.{ a | a ◃ U }.
 
+notation "◃U" non associative with precedence 55 for @{ 'coverage $U }.
+
 interpretation "coverage cover" 'coverage U = (coverage ? U).
 
+(*D
+
+Here we define the equation characterizing the cover relation.
+Even if it is not part of the paper, we proved that `◃(U)` is
+the minimum solution of this equation; the interested reader should be able
+to replay the proof with Matita.
+
+D*)
+
 ndefinition cover_equation : ∀A:Ax.∀U,X:Ω^A.CProp[0] ≝ λA,U,X.
   ∀a.a ∈ X ↔ (a ∈ U ∨ ∃i:𝐈 a.∀y.y ∈ 𝐂 a i → y ∈ X).
 
 ntheorem coverage_cover_equation : ∀A,U. cover_equation A U (◃U).
 #A; #U; #a; @; #H;
-##[ nelim H; #b; (* clear is missing *)
+##[ nelim H; #b;
     ##[ #bU; @1; nassumption;
     ##| #i; #CaiU; #IH; @2; @ i; #c; #cCbi; ncases (IH ? cCbi);
         ##[ #E; @; napply E;
@@ -144,6 +818,14 @@ ntheorem coverage_min_cover_equation :
 ##]
 nqed.
 
+(*D
+
+We similarly define the subset of points "fished" by `F`, the
+equation characterizing `⋉(F)`, and prove that fish is
+the biggest solution of this equation.
+
+D*)
+
 notation "⋉F" non associative with precedence 55
 for @{ 'fished $F }.
@@ -154,31 +836,46 @@ interpretation "fished fish" 'fished F = (fished ? F).
 
 ndefinition fish_equation : ∀A:Ax.∀F,X:Ω^A.CProp[0] ≝ λA,F,X.
   ∀a. a ∈ X ↔ a ∈ F ∧ ∀i:𝐈 a.∃y.y ∈ 𝐂 a i ∧ y ∈ X.
 
-ntheorem fised_fish_equation : ∀A,F. fish_equation A F (⋉F).
-#A; #F; #a; @; (* bug: doing the case analysis below differs from doing it above *) #H; ncases H;
+ntheorem fished_fish_equation : ∀A,F. fish_equation A F (⋉F).
+#A; #F; #a; @; (* *; does not generate an out-type binding a *) #H; ncases H;
 ##[ #b; #bF; #H2; @ bF; #i; ncases (H2 i); #c; *; #cC; #cF; @c; @ cC;
     napply cF;
 ##| #aF; #H1; @ aF; napply H1;
 ##]
 nqed.
 
-ntheorem fised_max_fish_equation : ∀A,F,G. fish_equation A F G → G ⊆ ⋉F.
+ntheorem fished_max_fish_equation : ∀A,F,G. fish_equation A F G → G ⊆ ⋉F.
 #A; #F; #G; #H; #a; #aG; napply (fish_rec … aG);
 #b; ncases (H b); #H1; #_; #bG; ncases (H1 bG); #E1; #E2; nassumption;
 nqed.
 
-nrecord nAx : Type[2] ≝ {
-  nS:> setoid; (*Type[0];*)
+(*D
+
+Part 2, the new set of axioms
+-----------------------------
+
+Since the names of defined objects (records included) have to be unique
+within the same file, we prefix every field name
+in the new definition of the axiom set with `n`.
+
+D*)
+
+nrecord nAx : Type[1] ≝ {
+  nS:> Type[0];
   nI: nS → Type[0];
   nD: ∀a:nS. nI a → Type[0];
   nd: ∀a:nS. ∀i:nI a. nD a i → nS
}.

-(*
-TYPE f A → B, g : B → A, f ∘ g = id, g ∘ g = id.
+(*D

-a = b → I a = I b
-*)
+We again define a notation for the projections, making the
+projected record an implicit argument. Note that, since we already have
+a notation for `𝐈`, we just add another interpretation for it. The
+system, looking at the argument of `𝐈`, will be able to choose
+the correct interpretation.
+
+D*)

notation "𝐃 \sub ( ⟨a,\emsp i⟩ )" non associative with precedence 70 for @{ 'D $a $i }.
notation "𝐝 \sub ( ⟨a,\emsp i,\emsp j⟩ )" non associative with precedence 70 for @{ 'd $a $i $j}.
@@ -190,22 +887,79 @@ interpretation "D" 'D a i = (nD ? a i).
interpretation "d" 'd a i j = (nd ? a i j).
interpretation "new I" 'I a = (nI ? a).

+(*D
+
+The first result the paper presents to motivate the new formulation
+of the axiom set is the possibility to define an old axiom set
+starting from a new one and vice versa. 
The key definition for
+such a construction is the image of d(a,i).
+The paper defines the image as
+
+> Im[d(a,i)] = { d(a,i,j) | j : D(a,i) }
+
+but this rather informal notation poses some problems. The image is
+often used as the left hand side of the ⊆ predicate
+
+> Im[d(a,i)] ⊆ V
+
+Of course this writing is interpreted by the authors as follows
+
+> ∀j:D(a,i). d(a,i,j) ∈ V
+
+If we need to use the image to define `𝐂` (a subset of `S`) we are obliged to
+form a subset, i.e. to place a single variable `{ here | … }` of type `S`.
+
+> Im[d(a,i)] = { y | ∃j:D(a,i). y = d(a,i,j) }
+
+This poses no theoretical problems, since `S` is a Type and thus
+equipped with the `Id` equality. If `S` were a setoid, the equality here
+would be the one of the setoid.
+
+Unless we define two different images, one for stating that the image is ⊆ of
+something and another one to define `𝐂`, we always end up using the latter.
+Thus the statement `Im[d(a,i)] ⊆ V` unfolds to
+
+> ∀x:S. ( ∃j.x = d(a,i,j) ) → x ∈ V
+
+That, up to rewriting with the equation defining `x`, is what we mean.
+Since we decided to use `Id`, the rewriting always works (the elimination
+principle for `Id` is Leibniz's equality, which is quantified over
+the context).
+
+The problem that arises if we decide to make `S` a setoid is that
+`V` has to be extensional w.r.t. the equality of `S` (i.e. the characteristic
+functional proposition has to quotient its input with a relation bigger
+than the one of `S`):
+
+> ∀x,y:S. x = y → x ∈ V → y ∈ V
+
+If `V` is a complex construction, the proof may not be trivial.
+
+D*)
+
+include "logic/equality.ma".
+
 ndefinition image ≝ λA:nAx.λa:A.λi. { x | ∃j:𝐃 a i. x = 𝐝 a i j }.
 
 notation > "𝐈𝐦 [𝐝 term 90 a term 90 i]" non associative with precedence 70 for @{ 'Im $a $i }.
-notation "𝐈𝐦 [𝐝 \sub ( ⟨a,\emsp i⟩ )]" non associative with precedence 70 for @{ 'Im $a $i }.
+notation < "𝐈𝐦 [𝐝 \sub ( ⟨a,\emsp i⟩ )]" non associative with precedence 70 for @{ 'Im $a $i }.
 interpretation "image" 'Im a i = (image ? a i).
 
+(*D
+
+Thanks to our definition of image, we can define a function mapping a
+new axiom set to an old one, and vice versa. Note that in the second
+definition, when we give the `𝐂` component, the projection of the
+Σ-type is inlined (constructed on the fly by `*;`)
+while in the paper it was named `fst`.
+
+D*)
+
 ndefinition Ax_of_nAx : nAx → Ax.
 #A; @ A (nI ?); #a; #i; napply (𝐈𝐦 [𝐝 a i]); nqed.
 
-ninductive sigma (A : Type[0]) (P : A → CProp[0]) : Type[0] ≝
-  sig_intro : ∀x:A.P x → sigma A P.
-
-interpretation "sigma" 'sigma \eta.p = (sigma ? p).
-
 ndefinition nAx_of_Ax : Ax → nAx.
 #A; @ A (I ?);
 ##[ #a; #i; napply (Σx:A.x ∈ 𝐂 a i);
@@ -213,29 +967,128 @@ ndefinition nAx_of_Ax : Ax → nAx.
 ##]
 nqed.
 
+(*D
+
+We now prove that the two functions above form a retraction pair for
+the `C` component of the axiom set. To prove this we face a little
+problem, since CIC is not equipped with η-conversion. This means that
+the following does not hold (where `A` is an axiom set):
+
+> A = (S A, I A, C A)
+
+This can be proved only under a pattern match over `A`, which means
+that the resulting computational content of the proof is a program
+that computes something only if `A` is a concrete axiom set.
+
+To state the lemma we have to drop notation, and explicitly
+give the axiom set in input to the `C` projection.
+
+D*)
+
+nlemma Ax_nAx_equiv :
+  ∀A:Ax. ∀a,i. C (Ax_of_nAx (nAx_of_Ax A)) a i ⊆ C A a i ∧
+         C A a i ⊆ C (Ax_of_nAx (nAx_of_Ax A)) a i.
+#A; #a; #i; @; #b; #H; (** screenshot "retr-1". *)
+##[ ncases A in a i b H; #S; #I; #C; #a; #i; #b; #H;(** screenshot "retr-2". *)
+    nchange in a with S; nwhd in H; (** screenshot "retr-3". *)
+    ncases H; #x; #E; nrewrite > E; nwhd in x; (** screenshot "retr-4". *)
+    ncases x; #b; #Hb; nnormalize; nassumption;
+##| ncases A in a i b H; #S; #I; #C; #a; #i; #b; #H; @;
+    ##[ @ b; nassumption;
+    ##| nnormalize; @; ##]
+##]
+nqed.
+
+(*D
+D[retr-1]
+Look for example at the type of `a`. 
The command `nchange in a with A`
+would fail because of the missing η-conversion rule. We thus have
+to pattern match over `A` and introduce its pieces.
+
+D[retr-2]
+Now the system accepts that the type of `a` is the first component
+of the axiom set, now called `S`. Unfolding definitions in `H` we discover
+there is still some work to do.
+
+D[retr-3]
+To use the equation defining `b` we have to eliminate `H`. Unfolding
+definitions in `x` tells us there is still something to do. The `nrewrite`
+tactic is a shortcut for the elimination principle of the equality.
+It accepts an additional argument `>` or `<` to rewrite left-to-right
+or right-to-left respectively.
+
+D[retr-4]
+We defined `𝐝` to be the first projection of `x`, thus we have to
+eliminate `x` to actually compute `𝐝`.
+
+The remaining part of the proof is not interesting and poses no
+new problems.
+
+D*)
+
+(*D
+
+We then define the inductive type of ordinals, parametrized over an axiom
+set. We also attach some notations to the constructors.
+
+D*)
+
 ninductive Ord (A : nAx) : Type[0] ≝
 | oO : Ord A
 | oS : Ord A → Ord A
 | oL : ∀a:A.∀i.∀f:𝐃 a i → Ord A. Ord A.
 
-notation "Λ term 90 f" non associative with precedence 50 for @{ 'oL $f }.
+notation "0" non associative with precedence 90 for @{ 'oO }.
 notation "x+1" non associative with precedence 50 for @{'oS $x }.
+notation "Λ term 90 f" non associative with precedence 50 for @{ 'oL $f }.
 
-interpretation "ordinals Lambda" 'oL f = (oL ? ? ? f).
+interpretation "ordinals Zero" 'oO = (oO ?).
 interpretation "ordinals Succ" 'oS x = (oS ? x).
+interpretation "ordinals Lambda" 'oL f = (oL ? ? ? f).
+
+(*D
+
+The definition of `U⎽x` is by recursion over the ordinal `x`.
+We thus define a recursive function using the `nlet rec` command.
+The `on x` directive tells
+the system on which argument the function is (structurally) recursive.
+
+In the `oS` case we use a local definition to name the recursive call
+since it is used twice. 
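
As a toy illustration of the `nlet rec … on` syntax just described (this
function is not part of the development, and its name `ocopy` is made up for
the example; it merely mirrors the structure of `Ord`), a structurally
recursive identity on ordinals could be written as:

```matita
nlet rec ocopy (A : nAx) (x : Ord A) on x : Ord A ≝
  match x with
  [ oO ⇒ oO A
  | oS y ⇒ oS A (ocopy A y)
  | oL a i f ⇒ oL A a i (λj. ocopy A (f j)) ].
```

Every recursive call is made on a subterm of `x` (`y`, or `f j` under the
binder), which is what the structural check behind `on x` enforces.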
+
+Note that Matita does not support notation in the left hand side
+of a pattern match, and thus the names of the constructors have to
+be spelled out verbatim.
+
+D*)
 
 nlet rec famU (A : nAx) (U : Ω^A) (x : Ord A) on x : Ω^A ≝
   match x with
   [ oO ⇒ U
-  | oS y ⇒ let Un ≝ famU A U y in Un ∪ { x | ∃i.𝐈𝐦[𝐝 x i] ⊆ Un}
+  | oS y ⇒ let U_n ≝ famU A U y in U_n ∪ { x | ∃i.𝐈𝐦[𝐝 x i] ⊆ U_n}
   | oL a i f ⇒ { x | ∃j.x ∈ famU A U (f j) } ].
 
 notation < "term 90 U \sub (term 90 x)" non associative with precedence 50 for @{ 'famU $U $x }.
 notation > "U ⎽ term 90 x" non associative with precedence 50 for @{ 'famU $U $x }.
 
 interpretation "famU" 'famU U x = (famU ? U x).
 
+(*D
+
+As the input notation for U_x we use the similar `U⎽x`, where the underscore,
+a character valid in identifier names, has been replaced by `⎽`, which is
+not. The symbol `⎽` can thus act as a separator, and can be typed as an
+alternative for `_` (i.e. by pressing ALT-L after typing `_`).
+
+The notion ◃(U) has to be defined as the subset of elements `y`
+belonging to `U⎽x` for some `x`. Moreover, we have to define the notion
+of cover between sets again, since the one defined at the beginning
+of the tutorial works only for the old axiom set.
+
+D*)

-ndefinition ord_coverage : ∀A:nAx.∀U:Ω^A.Ω^A ≝ λA,U.{ y | ∃x:Ord A. y ∈ famU ? U x }.
+ndefinition ord_coverage : ∀A:nAx.∀U:Ω^A.Ω^A ≝
+  λA,U.{ y | ∃x:Ord A. y ∈ famU ? U x }.
 
 ndefinition ord_cover_set ≝ λc:∀A:nAx.Ω^A → Ω^A.λA,C,U.
   ∀y.y ∈ C → y ∈ c A U.
 
@@ -244,68 +1097,199 @@ interpretation "coverage new cover" 'coverage U = (ord_coverage ? U).
 interpretation "new covers set" 'covers a U = (ord_cover_set ord_coverage ? a U).
 interpretation "new covers" 'covers a U = (mem ? (ord_coverage ? U) a).
 
-ntheorem new_coverage_reflexive:
-  ∀A:nAx.∀U:Ω^A.∀a. a ∈ U → a ◃ U.
-#A; #U; #a; #H; @ (oO A); napply H;
-nqed.
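
To get familiar with the `⎽` input notation and the unfolding behaviour of
`famU`, one can state a small sanity check. This lemma is not part of the
formalization (the name `famU_zero` is ours), and is offered as a sketch: it
should go through by mere reduction, since `U⎽0` computes to `U` by the `oO`
branch of `famU`.

```matita
nlemma famU_zero : ∀A:nAx.∀U:Ω^A.∀a:A. a ∈ U → a ∈ U⎽0.
#A; #U; #a; #H; napply H;
nqed.
```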
+(*D
+
+Before proving that this cover relation validates the reflexivity and infinity
+rules, we prove a little technical lemma that is used in the proof of the
+infinity rule.
 
-nlemma ord_subset:
-  ∀A:nAx.∀a:A.∀i,f,U.∀j:𝐃 a i.U⎽(f j) ⊆ U⎽(Λ f).
+D*)
+
+nlemma ord_subset: ∀A:nAx.∀a:A.∀i,f,U.∀j:𝐃 a i. U⎽(f j) ⊆ U⎽(Λ f).
 #A; #a; #i; #f; #U; #j; #b; #bUf; @ j; nassumption;
 nqed.
 
-naxiom AC : ∀A,a,i,U.(∀j:𝐃 a i.∃x:Ord A.𝐝 a i j ∈ U⎽x) → (Σf.∀j:𝐃 a i.𝐝 a i j ∈ U⎽(f j)).
+(*D
+
+The proof of infinity uses the following form of the Axiom of Choice,
+which cannot be proved inside Matita, since the existential quantifier
+lives in the sort of predicative propositions while the sigma in the conclusion
+lives in the sort of data types, and thus the former cannot be eliminated
+to provide the witness for the latter.
+
+D*)
+
+naxiom AC : ∀A,a,i,U.
+  (∀j:𝐃 a i.∃x:Ord A.𝐝 a i j ∈ U⎽x) → (Σf.∀j:𝐃 a i.𝐝 a i j ∈ U⎽(f j)).
 
-naxiom setoidification :
-  ∀A:nAx.∀a,b:A.∀U.a=b → b ∈ U → a ∈ U.
+(*D
 
-(*DOCBEGIN
+Note that, if we later decide to identify ∃ and Σ, `AC` becomes
+trivially provable.
+
+D*)
+
+nlemma AC_exists_is_sigma : ∀A,a,i,U.
+  (∀j:𝐃 a i.Σx:Ord A.𝐝 a i j ∈ U⎽x) → (Σf.∀j:𝐃 a i.𝐝 a i j ∈ U⎽(f j)).
+#A; #a; #i; #U; #H; @;
+##[ #j; ncases (H j); #x; #_; napply x;
+##| #j; ncases (H j); #x; #Hx; napply Hx; ##]
+nqed.
 
-Bla Bla,
+(*D