2 Coinductive Types and Predicates
5 include "basics/lists/list.ma".
6 include "chapter12.ma".
8 (* The only primitive data types of Matita are dependent products and universes.
9 So far every other user defined data type has been an inductive type. An
10 inductive type is declared by giving its list of constructors (or introduction
rules in the case of predicates). An inhabitant of an inductive type is obtained
by composing together a finite number of constructors and data of other types,
13 according to the type of the constructors. Therefore all inhabitants of inductive
14 types are essentially finite objects. Natural numbers, lists, trees, states of
15 a DFA, letters of an alphabet are all finite and can be defined inductively.
17 If you think of an inhabitant of an inductive type as a tree of a certain shape,
whose nodes are constructors, the only trees that can be represented are trees of
finite height. Note, however, that it is possible to have trees of infinite
width, by giving a constructor of a type I an argument that is an enumeration
of elements of I (e.g. a function ℕ → I). *)
(* Example of an infinitely branching tree of elements of type A stored in the
   nodes. *)
25 inductive infbrtree (A: Type[0]) : Type[0] ≝
   Nil: infbrtree A
 | Node: A → (ℕ → infbrtree A) → infbrtree A.
(* Example: the tree of natural numbers whose root holds 0 and whose children
   hold 1, 2, 3, …, each of them having only leaves (Nil) as children. *)
31 example infbrtree_ex ≝ Node ℕ 0 (λn. Node ℕ (S n) (λ_.Nil ℕ)).
33 (*** Infinite data types via functions ***)
(* In mathematics and, less frequently, in computer science, there is also the
need to represent and manipulate types of infinite objects. Typical examples are:
sequences, real numbers (a special case of sequences), data streams (e.g. as
read from a network interface), traces of diverging computations of a program,
etc. One possible representation, used in mathematics for a long time, is
to describe an infinite object by means of an infinite collection of
approximations (also called observations). Often, the infinite collection can
be structured in a sequence, i.e. identified with a function from the natural
numbers to the approximations. *)
45 (* Example 1: sequences of elements of type A *)
46 definition seq : Type[0] → Type[0] ≝ λA:Type[0]. ℕ → A.
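(* For instance, the infinite sequence 1,2,3,… of positive naturals is just the
   successor function; a tiny illustrative definition, not used in the rest of
   the chapter. *)
definition seq_ex: seq ℕ ≝ λn. S n.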
48 (* Example 2: Real numbers as Cauchy sequences and their addition. *)
49 (* First we axiomatize the relevant properties of rational numbers. *)
axiom Q: Type[0].
axiom Qdist: Q → Q → Q.
52 axiom symmetric_Qdist: ∀x,y. Qdist x y = Qdist y x.
53 axiom Qleq: Q → Q → Prop.
54 interpretation "Qleq" 'leq x y = (Qleq x y).
55 axiom transitive_Qleq: transitive … Qleq.
56 axiom Qplus: Q → Q → Q.
57 interpretation "Qplus" 'plus x y = (Qplus x y).
axiom Qleq_Qplus:
 ∀qa1,qb1,qa2,qb2. qa1 ≤ qb1 → qa2 ≤ qb2 → qa1 + qa2 ≤ qb1 + qb2.
axiom Qdist_Qplus:
 ∀qa1,qb1,qa2,qb2. Qdist (qa1 + qa2) (qb1 + qb2) ≤ Qdist qa1 qb1 + Qdist qa2 qb2.
axiom Qhalve: Q → Q.
axiom Qplus_Qhalve_Qhalve: ∀q. Qhalve q + Qhalve q = q.
64 axiom triangular_Qdist: ∀x,y,z. Qdist x z ≤ Qdist x y + Qdist y z.
66 (* A sequence of rationals. *)
67 definition Qseq: Type[0] ≝ seq Q.
69 (* The Cauchy property *)
70 definition Cauchy: Qseq → Prop ≝
71 λR:Qseq. ∀eps. ∃n. ∀i,j. n ≤ i → n ≤ j → Qdist (R i) (R j) ≤ eps.
73 (* A real number is an equivalence class of Cauchy sequences. In type theory
we can define R as a setoid whose carrier R_ is the type of Cauchy sequences. *)
record R_: Type[0] ≝
 { r: Qseq
 ; isCauchy: Cauchy r
 }.
80 (* Two sequences are equal when for every epsilon they are eventually pointwise
81 closer than epsilon. *)
82 definition R : setoid ≝
mk_setoid R_
  (mk_equivalence_relation …
85 (λr1,r2:R_. ∀eps. ∃n. ∀i. n ≤ i → Qdist (r r1 i) (r r2 i) ≤ eps) …).
86 [ (* transitivity *) #x #y #z #H1 #H2 #eps cases (H1 (Qhalve eps)) #i -H1 #H1
87 cases (H2 (Qhalve eps)) #j -H2 #H2 %{(max i j)} #l #Hmax
88 <(Qplus_Qhalve_Qhalve eps) @transitive_Qleq
89 [3: @(Qleq_Qplus … (H1 l …) (H2 l …)) /2 by le_maxl,le_maxr/
91 | (* symmetry *) #x #y #H #eps cases (H eps) /3 by ex_intro/
92 | (* reflexivity *) #x #eps cases (isCauchy x eps) /3 by ex_intro/ ]
unification hint 0 ≔ ;
    X ≟ R
(* ---------------------------------------- *) ⊢
    R_ ≡ carrier X.
100 (* The following coercion is used to write r n to extract the n-th approximation
101 from the real number r *)
102 coercion R_to_fun : ∀r:R. ℕ → Q ≝ r on _r:R to ?.
104 (* Adding two real numbers just requires pointwise addition of the
105 approximations. The proof that the resulting sequence is Cauchy is the standard
106 one: to obtain an approximation up to eps it is necessary to approximate both
summands up to eps/2. The proof that the function is well defined w.r.t. the
equivalence relation is postponed: it is given below, when Rplus_ is lifted to a
morphism. *)
109 definition Rplus_: R_ → R_ → R_ ≝
110 λr1,r2. mk_R_ (λn. r1 n + r2 n) ….
112 cases (isCauchy r1 (Qhalve eps)) #n1 #H1
113 cases (isCauchy r2 (Qhalve eps)) #n2 #H2
114 %{(max n1 n2)} #i #j #K1 #K2 @(transitive_Qleq … (Qdist_Qplus …))
115 <(Qplus_Qhalve_Qhalve eps) @Qleq_Qplus [@H1 @le_maxl | @H2 @le_maxr]
119 (* We lift Rplus_ to a morphism by proving that it respects the equality for
120 real numbers. We closely follow the usual mathematical proof: to compute the
sum up to epsilon it is sufficient to fetch the arguments at precision
epsilon/2. *)
123 definition Rplus: R ⤞ R ⤞ R ≝
124 mk_bin_morphism … Rplus_ ….
125 #x1 #x2 #y1 #y2 #Hx #Hy #eps
126 cases (Hx (Qhalve eps)) #i -Hx #Hx
127 cases (Hy (Qhalve eps)) #j -Hy #Hy
128 %{(max i j)} #l #Hmax <(Qplus_Qhalve_Qhalve eps) @transitive_Qleq
129 [3: @(Qleq_Qplus … (Hx l …) (Hy l …)) /2 by le_maxl,le_maxr/
133 (* Example 3: traces of a program *)
134 (* Let us introduce a very simple programming language whose instructions
135 can test and set a single implicit variable. *)
136 inductive instr: Type[0] ≝
137 p_set: ℕ → instr (* sets the variable to a constant *)
138 | p_incr: instr (* increments the variable *)
139 | p_while: list instr → instr. (* repeats until the variable is 0 *)
(* The state of a program is given by the value of the variable and the list of
instructions still to be executed. *)
143 definition state ≝ ℕ × (list instr).
(* The transition relation between a state and the next one. *)
146 inductive next: state → state → Prop ≝
147 n_set: ∀n,k,o. next 〈o,(p_set n)::k〉 〈n,k〉
148 | n_incr: ∀k,o. next 〈o,p_incr::k〉 〈S o,k〉
149 | n_while_exit: ∀b,k. next 〈0,(p_while b)::k〉 〈0,k〉
150 | n_while_loop: ∀b,k,o. next 〈S o,(p_while b)::k〉 〈S o,b@(p_while b)::k〉.
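(* For instance, executing p_incr in a state where the variable holds 2 yields a
   state where it holds 3. This tiny sanity check is not used later; it should
   follow directly from the n_incr introduction rule. *)
example next_incr_ex: next 〈2,p_incr::nil ?〉 〈3,nil ?〉.
@n_incr
qed.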
(* A diverging trace is a sequence of states such that the (n+1)-th state is
obtained by executing one step from the n-th state. *)
154 record div_trace: Type[0] ≝
 { div_tr: seq state
 ; div_well_formed: ∀n. next (div_tr n) (div_tr (S n))
 }.
(* The previous definition of trace is not very computational: we cannot write
a program that, given an initial state, returns its trace. To write that function,
we first write a computable version of next, called step. *)
162 definition step: state → option state ≝
173 | S p ⇒ 〈S p,b@(p_while b)::k〉 ]]].
175 theorem step_next: ∀o,n. step o = Some … n → next o n.
176 * #o * [ #n normalize #abs destruct ]
180 | #b #tl * #n' #tl' cases o normalize [2: #r]]
184 theorem next_step: ∀o,n. next o n → step o = Some … n.
185 * #o #k * #n #k' #H inversion H normalize
(* Termination is the archetypal undecidable problem. Therefore there is no
function that, given an initial state, returns the diverging trace if the program
diverges, and fails otherwise. The best we can do is to give an alternative
195 definition of trace that captures both diverging and converging computations. *)
196 record trace: Type[0] ≝
197 { tr: seq (option state)
 ; well_formed: ∀n,s. tr n = Some … s → tr (S n) = step s
 }.
201 (* The trace is diverging if every state is not final. *)
202 definition diverging: trace → Prop ≝
203 λt. ∀n. tr t n ≠ None ?.
205 (* The two definitions of diverging traces are equivalent. *)
206 theorem div_trace_to_diverging_trace:
207 ∀d: div_trace. ∃t: trace. diverging t ∧ tr t 0 = Some … (div_tr d 0).
208 #d %{(mk_trace (λn.Some ? (div_tr d n)) …)}
209 [2: % // #n % #abs destruct
210 | #n #s #EQ destruct lapply (div_well_formed d n) /2 by div_well_formed, next_step/ ]
213 theorem diverging_trace_to_div_trace:
214 ∀t: trace. diverging t → ∃d: div_trace. tr t 0 = Some … (div_tr d 0).
216 [ % [ #n lapply (H n) -H cases (tr t n) [ * #abs cases (abs …) // ] #s #_ @s
217 | #n lapply (well_formed t n)
218 lapply (H n) cases (tr t n) normalize [ * #abs cases (abs …) // ]
219 * #o #k #_ lapply (H (S n)) -H
220 cases (tr t (S n)) normalize
221 [ * #abs cases (abs …) // ] * #o' #k' #_ #EQ lapply (EQ … (refl …)) -EQ
222 normalize cases k normalize [ #abs destruct ] #hd #tl #EQ destruct -EQ
224 | lapply (H O) -H cases (tr t O) [ * #abs cases (abs …) // ] // ]
227 (* However, given an initial state we can always compute a trace.
228 Note that every time the n-th element of the trace is accessed, all the
229 elements in position ≤ n are computed too. *)
let rec compute_trace_seq (s:state) (n:nat) on n : option state ≝
 match n with
 [ O ⇒ Some … s
 | S m ⇒
    match compute_trace_seq s m with
    [ None ⇒ None …
    | Some o ⇒ step o ]].
238 definition compute_trace: ∀s:state. Σt:trace. tr t 0 = Some … s.
240 [ %{(compute_trace_seq s)}
241 #n #o elim n [ whd in ⊢ (??%? → ??%?); #EQ destruct // ]
242 -n #n #_ #H whd in ; whd in ⊢ (??%?); >H //
246 (*** Infinite data types as coinductive types ***)
248 (* All the previous examples were handled very easily via sequences
declared as functions. A few criticisms can be made of this approach though:
250 1. the sequence allows for random access. In many situations, however, the
251 elements of the sequence are meant to be read one after the other, in
252 increasing order of their position. Moreover, the elements are meant to be
253 consumed (read) linearly, i.e. just once. This suggests a more efficient
254 implementation where sequences are implemented with state machines that
255 emit the next value and enter a new state every time they are read. Indeed,
256 some programming languages like OCaml differentiate (possibly infinite)
lists, which are immutable, from mutable streams whose only access operation
258 yields the head and turns the stream into its tail. Data streams read from
259 the network are a typical example of streams: the previously read values are
260 not automatically memoized and are lost if not saved when read. Files on
261 disk are also usually read via streams to avoid keeping all of them in main
262 memory. Another typical case where streams are used is that of diverging
263 computations: in place of generating at once an infinite sequence of values,
264 a function is turned into a stream and asking the next element of the stream
runs one more iteration of the function to produce the next output (often
   on demand).
267 2. if a sequence computes the n-th entry by recursively calling itself on every
268 entry less than n, accessing the n-th entry requires re-computation of all
269 entries in position ≤ n, which is very inefficient.
270 3. by representing an infinite object as a collection of approximations, the
271 structure of the object is lost. This was not evident in the previous
272 examples because in all cases the intrinsic structure of the datatype was
273 just that of being ordered and sequences capture the order well. Imagine,
274 however, that we want to represent an infinite binary tree of elements of
type A with the previous technique. We need to associate to each position
in the infinite tree an element of type A. A position in the tree is itself
a path from the root to the node of interest, i.e. a finite list of booleans.
Therefore the infinite tree is represented as a function list 𝔹 → A, where 𝔹 is
the type of booleans, and the tree structure is already less clear. Suppose now
that the binary tree may not be full, i.e. some nodes can have less than two
children. Representing such a tree is definitely harder. We may for example use
the data type list 𝔹 → option A where None would be associated to the position
b1 ... bn if no node exists at that position. However, we would need
to add the invariant that if b1 ... bn is undefined (i.e. mapped to None),
so are all its extensions b1 ... bn b_{n+1} ... b_{n+j}.
287 The previous observations suggest the need for primitive infinite datatypes
in the language, usually called coinductive types or codata. Matita allows the
user to declare coinductive types with the same syntax used for inductive types,
just replacing the keyword inductive with coinductive. Semantically, the two
declarations differ because a coinductive type also contains the infinite
inhabitants that respect the typechecking rules. *)
(* Example 1 revisited: never-ending streams of elements of type A *)
296 coinductive streamseq (A: Type[0]) : Type[0] ≝
297 sscons: A → streamseq A → streamseq A.
(* Coinductive types can be inhabited by infinite data using coinductive
definitions, introduced by the keyword let corec. The syntax of let corec
definitions is the same as that of let rec definitions, except for the
declaration of the recursive argument. While let rec definitions are recursive
definitions that must be strictly decreasing on the recursive argument, let corec
definitions are productive recursive definitions. A recursive definition is
productive if, when fully applied to its arguments, it reduces in a finite amount
of time to a constructor of a coinductive type.

Let's see some simple examples of corecursive definitions. *)
311 (* The streamseq 0 0 0 ...
312 Note that all_zeros is not a function and does not have any argument.
313 The definition is clearly productive because it immediately reduces to
314 the constructor sscons. *)
315 let corec all_zeros: streamseq nat ≝ sscons nat 0 all_zeros.
317 (* The streamseq n (n+1) (n+2) ...
318 The definition behaves like an automaton with state n. When the
319 streamseq is pattern matched, the current value n is returned as head
320 of the streamseq and the tail of the streamseq is the automaton with
state (S n). Therefore obtaining the n-th tail of the stream requires O(n)
operations, but every further access to its value only costs O(1). Moreover,
323 in the future the implementation of Matita could automatically memoize
324 streams so that obtaining the n-th element would also be an O(1) operation
325 if the same element was previously accessed at least once. This is what
is currently done in the implementation of the Coq system, for example. *)
328 let corec from_n (n:ℕ) : streamseq nat ≝ sscons … n (from_n (S n)).
330 (* In order to retrieve the n-th element from a streamseq we can write a
331 function recursive over n. *)
332 let rec streamseq_nth (A: Type[0]) (s: streamseq A) (n:ℕ) on n : A ≝
333 match s with [ sscons hd tl ⇒
334 match n with [ O ⇒ hd | S m ⇒ streamseq_nth … tl m ]].
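(* For instance, the element at position 2 of from_n 3 should be 5; the following
   sanity check (not used later) should be provable by mere computation, since the
   match inside streamseq_nth triggers the expansion of from_n. *)
example streamseq_nth_from_n_ex: streamseq_nth ℕ (from_n 3) 2 = 5.
normalize //
qed.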
(* Any sequence can be turned into the equivalent streamseq, and the other way
   around. *)
338 let corec streamseq_of_seq (A: Type[0]) (s: seq A) (n:ℕ) : streamseq A ≝
339 sscons … (s n) (streamseq_of_seq A s (S n)).
341 lemma streamseq_of_seq_ok:
342 ∀A:Type[0]. ∀s: seq A. ∀m,n.
343 streamseq_nth A (streamseq_of_seq … s n) m = s (m+n).
344 #A #s #m elim m normalize //
347 definition seq_of_streamseq: ∀A:Type[0]. streamseq A → seq A ≝ streamseq_nth.
349 lemma seq_of_streamseq_ok:
350 ∀A:Type[0]. ∀s: streamseq A. ∀n. seq_of_streamseq … s n = streamseq_nth … s n.
354 (* Example 2 revisited: Real numbers as Cauchy sequences and their addition.
We closely follow example 2, replacing sequences with streamseqs. *)
358 definition Qstreamseq: Type[0] ≝ streamseq Q.
360 definition Qstreamseq_nth ≝ streamseq_nth Q.
362 (* The Cauchy property *)
363 definition Cauchy': Qstreamseq → Prop ≝
364 λR:Qstreamseq. ∀eps. ∃n. ∀i,j. n ≤ i → n ≤ j → Qdist (Qstreamseq_nth R i) (Qstreamseq_nth R j) ≤ eps.
366 (* A real number is an equivalence class of Cauchy sequences. In type theory
367 we can define R' as a setoid whose carrier R_' is the type of Cauchy sequences. *)
368 record R_': Type[0] ≝
 { r': Qstreamseq
 ; isCauchy': Cauchy' r'
 }.
373 (* Two sequences are equal when for every epsilon they are eventually pointwise
374 closer than epsilon. The script is exactly the same already used when we
375 defined R using functions. *)
376 definition R' : setoid ≝
mk_setoid R_'
  (mk_equivalence_relation …
379 (λr1,r2:R_'. ∀eps. ∃n. ∀i. n ≤ i →
380 Qdist (Qstreamseq_nth (r' r1) i) (Qstreamseq_nth (r' r2) i) ≤ eps) …).
381 [ (* transitivity *) #x #y #z #H1 #H2 #eps cases (H1 (Qhalve eps)) #i -H1 #H1
382 cases (H2 (Qhalve eps)) #j -H2 #H2 %{(max i j)} #l #Hmax
383 <(Qplus_Qhalve_Qhalve eps) @transitive_Qleq
384 [3: @(Qleq_Qplus … (H1 l …) (H2 l …)) /2 by le_maxl,le_maxr/
386 | (* symmetry *) #x #y #H #eps cases (H eps) /3 by ex_intro/
387 | (* reflexivity *) #x #eps cases (isCauchy' x eps) /3 by ex_intro/ ]
unification hint 0 ≔ ;
    X ≟ R'
(* ---------------------------------------- *) ⊢
    R_' ≡ carrier X.
395 (* The following coercion is used to write r n to extract the n-th approximation
396 from the real number r *)
397 coercion R_to_fun' : ∀r:R'. ℕ → Q ≝ (λr. Qstreamseq_nth (r' r)) on _r:R' to ?.
399 (* Pointwise addition over Qstreamseq defined by corecursion.
400 The definition is productive because, when Rplus_streamseq is applied to
401 two closed values of type Qstreamseq, it will reduce to sscons. *)
402 let corec Rplus_streamseq (x:Qstreamseq) (y:Qstreamseq) : Qstreamseq ≝
403 match x with [ sscons hdx tlx ⇒
404 match y with [ sscons hdy tly ⇒
405 sscons … (hdx + hdy) (Rplus_streamseq tlx tly) ]].
(* The following lemma came for free when using sequences. In the case of streamseqs
408 it must be proved by induction over the index because Qstreamseq_nth is defined by
409 recursion over the index. *)
410 lemma Qstreamseq_nth_Rplus_streamseq:
 ∀i,x,y.
  Qstreamseq_nth (Rplus_streamseq x y) i = Qstreamseq_nth x i + Qstreamseq_nth y i.
413 #i elim i [2: #j #IH] * #xhd #xtl * #yhd #ytl // normalize @IH
416 (* The proof that the resulting sequence is Cauchy is exactly the same we
417 used for sequences, up to two applications of the previous lemma. *)
418 definition Rplus_': R_' → R_' → R_' ≝
419 λr1,r2. mk_R_' (Rplus_streamseq (r' r1) (r' r2)) ….
421 cases (isCauchy' r1 (Qhalve eps)) #n1 #H1
422 cases (isCauchy' r2 (Qhalve eps)) #n2 #H2
423 %{(max n1 n2)} #i #j #K1 #K2
424 >Qstreamseq_nth_Rplus_streamseq >Qstreamseq_nth_Rplus_streamseq
425 @(transitive_Qleq … (Qdist_Qplus …))
426 <(Qplus_Qhalve_Qhalve eps) @Qleq_Qplus [@H1 @le_maxl | @H2 @le_maxr]
430 (* We lift Rplus_' to a morphism by proving that it respects the equality for
431 real numbers. The script is exactly the same we used to lift Rplus_, but for
432 two applications of Qstreamseq_nth_Rplus_streamseq. *)
433 definition Rplus': R' ⤞ R' ⤞ R' ≝
434 mk_bin_morphism … Rplus_' ….
435 #x1 #x2 #y1 #y2 #Hx #Hy #eps
436 cases (Hx (Qhalve eps)) #i -Hx #Hx
437 cases (Hy (Qhalve eps)) #j -Hy #Hy
438 %{(max i j)} #l #Hmax <(Qplus_Qhalve_Qhalve eps) @transitive_Qleq
439 [3: @(Qleq_Qplus … (Hx l …) (Hy l …)) /2 by le_maxl,le_maxr/
440 |2: >Qstreamseq_nth_Rplus_streamseq >Qstreamseq_nth_Rplus_streamseq // ]
443 (***** Intermezzo: the dynamics of coinductive data ********)
(* Let corec definitions, like let rec definitions, are a form of recursive
definition where the definiendum occurs in the definiens. Matita compares
types up to reduction, and reduction always allows the expansion of non-recursive
448 definitions. If it also allowed the expansion of recursive definitions, reduction
449 could diverge and type checking would become undecidable. For example,
450 we defined all_zeros as "sscons … 0 all_zeros". If the system expanded all
451 occurrences of all_zeros, it would expand it forever to
452 "sscons … 0 (sscons … 0 (sscons … 0 …))".
454 In order to avoid divergence, recursive definitions are only expanded when a
455 certain condition is met. In the case of a let-rec defined function f that is
456 recursive on its n-th argument, f is only expanded when it occurs in an
457 application (f t1 ... tn ...) and tn is (the application of) a constructor.
458 Termination is guaranteed by the combination of this restriction and f being
459 guarded by destructors: the application (f t1 ... tn ...) can reduce to a term
460 that contains another application (f t1' ... tn' ...) but the size of tn'
461 (roughly the number of nested constructors) will be smaller than the size of tn
462 eventually leading to termination.
Dual restrictions are put on let corec definitions. If f is a let-corec defined
term, f is only expanded when it occurs in the term "match f t1 ... tn with ...".
To better see the duality, which is not syntactically perfect, note that: in
the recursive case f destructs terms and is expanded only when applied to a
constructor; in the co-recursive case f constructs terms and is expanded only
when it becomes the argument of the match destructor. Termination is guaranteed
by the combination of this restriction and f being productive: the term
"match f t1 ... tn ... with" will reduce in a finite amount of time to
a match applied to a constructor, whose reduction can expose another application
of f, but not another "match f t1' .. tn' ... with". Therefore, since no
new matches around f can be created by reduction, the number of
destructors that surround the applications of f decreases at every step,
eventually leading to termination.
Even if a coinductively defined f does not reduce in every context to its
definiens, it is possible to prove that the definiendum is equal to its
definiens. The trick is to prove first an eta-expansion lemma for the
coinductive type, stating that an inhabitant of the type is
equal to the one obtained by destructing and rebuilding it via a match. The proof
is simply by cases over the inhabitant. Let's see an example. *)
lemma eta_streamseq:
 ∀A:Type[0]. ∀s: streamseq A.
487 match s with [ sscons hd tl ⇒ sscons … hd tl ] = s.
(* In order to prove now that all_zeros (the definiendum) is equal to its
definiens, it suffices to rewrite the goal using the eta_streamseq lemma in order
to insert around the occurrence of all_zeros the match destructor that triggers its
expansion. *)
495 lemma all_zeros_expansion: all_zeros = sscons … 0 all_zeros.
496 <(eta_streamseq ? all_zeros) in ⊢ (??%?); //
(* Expansion lemmas like the one just presented are almost always required to
make progress in non-trivial proofs, as we will see in the next example. The
analogous expansion lemmas for let-rec definitions, instead, are rarely required. *)
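(* Note, on the other hand, that no expansion lemma is needed when the corecursive
   constant ends up as the argument of a match, as it does inside streamseq_nth:
   in that case reduction unfolds it by itself. For instance, the following sanity
   check (not used later) should be provable by a plain induction on the
   position. *)
lemma all_zeros_nth: ∀n. streamseq_nth ℕ all_zeros n = 0.
#n elim n [ // | #m #IH normalize @IH ]
qed.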
504 (* Example 3 revisited: traces of a program. *)
(* A diverging trace is a streamseq of states such that the (n+1)-th state is
obtained by executing one step from the n-th state. The definition of
div_well_formed' is the same we already used for sequences, but on streamseqs. *)
511 definition div_well_formed' : streamseq state → Prop ≝
 λs:streamseq state.
  ∀n. next (streamseq_nth … s n) (streamseq_nth … s (S n)).
515 record div_trace': Type[0] ≝
516 { div_tr':> streamseq state
 ; div_well_formed'': div_well_formed' div_tr'
 }.
(* The well-formedness predicate over streamseqs can also be redefined as a
coinductive predicate. A streamseq of states is well formed if the second
522 element is the next of the first and the stream without the first element
523 is recursively well formed. *)
524 coinductive div_well_formed_co: streamseq state → Prop ≝
 is_next:
  ∀hd1:state. ∀hd2:state. ∀tl:streamseq state.
527 next hd1 hd2 → div_well_formed_co (sscons … hd2 tl) →
528 div_well_formed_co (sscons … hd1 (sscons … hd2 tl)).
530 (* Note that Matita automatically proves the inversion principles for every
531 coinductive type, but no general coinduction principle. That means that
532 the elim tactic does not work when applied to a coinductive type. Inversion
533 and cases are the only ways to eliminate a coinductive hypothesis. *)
(* A proof of div_well_formed_co cannot be built by stacking a finite
number of constructors. The type can only be inhabited by a coinductive
537 definition. As an example, we show the equivalence between the two
538 definitions of well formedness for streamseqs. *)
540 (* A div_well_formed' stream is also div_well_formed_co. We write the proof
541 term explicitly, omitting the subterms that prove "next hd1 hd2" and
542 "div_well_formed' (sscond … hd2 tl)". Therefore we will obtain two proof
543 obligations. The given proof term is productive: if we apply it to a closed
544 term of type streamseq state and a proof that it is well formed, the two
545 matches in head position will reduce and the lambda-abstraction will be
546 consumed, exposing the is_next constructor. *)
548 let corec div_well_formed_to_div_well_formed_co
549 (s: streamseq state): div_well_formed' s → div_well_formed_co s ≝
550 match s with [ sscons hd1 tl1 ⇒
551 match tl1 with [ sscons hd2 tl ⇒
552 λH: div_well_formed' (sscons … hd1 (sscons … hd2 tl)).
553 is_next … (div_well_formed_to_div_well_formed_co (sscons … hd2 tl) …) ]].
554 [ (* First proof obligation: next hd1 hd2 *) @(H 0)
555 | (* Second proof obligation: div_well_formed' (sscons … hd2 tl) *) @(λn. H (S n)) ]
558 (* A div_well_formed_co stream is also div_well_formed'. This time the proof is
by induction over the index and inversion over the div_well_formed_co
hypothesis. *)
561 theorem div_well_formed_co_to_div_well_formed:
562 ∀s: streamseq state. div_well_formed_co s → div_well_formed' s.
563 #s #H #n lapply H -H lapply s -s elim n [2: #m #IH]
564 * #hd1 * #hd2 #tl normalize #H inversion H #hd1' #hd2' #tl' #Hnext #Hwf
(* Like for sequences, because of the undecidability of termination there is no
function that, given an initial state, returns the diverging trace if the program
diverges, and fails otherwise. We need a new data type to represent a
possibly infinite, possibly terminated stream of elements. Such a data type is
usually called a stream and can be defined elegantly as a coinductive type. *)
573 coinductive stream (A: Type[0]) : Type[0] ≝
   snil: stream A
 | scons: A → stream A → stream A.
577 (* The following lemma will be used to unblock blocked reductions, as
578 previously explained for eta_streamseq. *)
lemma eta_stream:
 ∀A:Type[0]. ∀s: stream A.
581 match s with [ snil ⇒ snil … | scons hd tl ⇒ scons … hd tl ] = s.
(* The definition of trace based on streams is more natural than that based
on sequences of optional states because there is no need for the invariant that
a None state is followed only by None states (to represent a terminated
sequence). Indeed, this is the first example where working with coinductive
types seems to yield advantages in terms of simplicity of the formalization.
However, in order to give the definition we first need to coinductively define
the well-formedness predicate, whose definition is more complex than the
previous one. *)
593 coinductive well_formed': stream state → Prop ≝
594 wf_snil: ∀s. step s = None … → well_formed' (scons … s (snil …))
| wf_scons: ∀hd1,hd2,tl.
    step hd1 = Some … hd2 →
598 well_formed' (scons … hd2 tl) →
599 well_formed' (scons … hd1 (scons … hd2 tl)).
601 (* Note: we could have equivalently defined well_formed' avoiding coinduction
602 by defining a recursive function to retrieve the n-th element of the stream,
603 if any. From now on we will stick to coinductive predicates only to show more
604 examples of usage of coinduction. In a formalization, however, it is always
better to explore several alternatives and see which ones work best for the
problem at hand. *)
608 record trace': Type[0] ≝
 { tr':> stream state
 ; well_formed'': well_formed' tr'
 }.
613 (* The trace is diverging if every state is not final. Again, we show here
614 a coinductive definition. *)
615 coinductive diverging': stream state → Prop ≝
616 mk_diverging': ∀hd,tl. diverging' tl → diverging' (scons … hd tl).
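(* For instance, the terminated (empty) stream is not diverging; a small sanity
   check, not used later, proved by inversion on the coinductive hypothesis. *)
lemma not_diverging_snil: diverging' (snil state) → False.
#abs inversion abs #hd #tl #_ #EQ destruct
qed.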
618 (* The two coinductive definitions of diverging traces are equivalent.
619 To state the two results we first need a function to retrieve the head
620 from traces and diverging traces. *)
621 definition head_of_streamseq: ∀A:Type[0]. streamseq A → A ≝
622 λA,s. match s with [ sscons hd _ ⇒ hd ].
624 definition head_of_stream: ∀A:Type[0]. stream A → option A ≝
625 λA,s. match s with [ snil ⇒ None … | scons hd _ ⇒ Some … hd ].
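(* A quick sanity check (not used later): the head of all_zeros should be 0,
   since the match inside head_of_streamseq triggers the expansion of all_zeros. *)
example head_of_all_zeros_ex: head_of_streamseq ℕ all_zeros = 0.
normalize //
qed.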
627 (* A streamseq can be extracted from a diverging stream using corecursion. *)
628 let corec mk_diverging_trace_to_div_trace'
629 (s: stream state) : diverging' s → streamseq state ≝
630 match s return λs. diverging' s → streamseq state
with
  [ snil ⇒ λabs: diverging' (snil …). ?
633 | scons hd tl ⇒ λH. sscons ? hd (mk_diverging_trace_to_div_trace' tl …) ].
634 [ cases (?:False) inversion abs #hd #tl #_ #abs' destruct
635 | inversion H #hd' #tl' #K #eq destruct @K ]
638 (* An expansion lemma will be needed soon. *)
639 lemma mk_diverging_trace_to_div_trace_expansion:
 ∀hd,tl. ∀H: diverging' (scons state hd tl). ∃K: diverging' tl.
  mk_diverging_trace_to_div_trace' (scons state hd tl) H =
642 sscons … hd (mk_diverging_trace_to_div_trace' tl K).
643 #hd #tl #H cases (eta_streamseq … (mk_diverging_trace_to_div_trace' ??)) /2/
(* CSC: A BUG SHOWS UP IN THE NEXT LEMMA AT QED TIME. THE DAEMON
IS USED FOR DEBUGGING *)
650 (* To complete the proof we need a final lemma: streamseqs extracted from
651 well formed diverging streams are well formed too. The definition is mostly
652 interactive because we need to use the expansion lemma above to rewrite
653 the type of the scons branch. Otherwise, Matita rejects the term as ill-typed *)
654 let corec div_well_formed_co_mk_diverging_trace_to_div_trace (t : stream state) :
 well_formed' t →
  ∀H:diverging' t. div_well_formed_co (mk_diverging_trace_to_div_trace' t H) ≝
657 match t return λt. well_formed' t → diverging' t → ?
with
  [ snil ⇒ λ_.λabs. ?
  | scons hd tl ⇒ λW.λH. ? ].
661 [ cases (?:False) inversion abs #hd #tl #_ #eq destruct
662 | cases (mk_diverging_trace_to_div_trace_expansion … H) #H' #eq
663 lapply (sym_eq ??? … eq) #Req cases Req -Req -eq -H
665 [ #_ #abs cases (?:False) inversion abs #hd #tl #_ #eq destruct
667 cases (mk_diverging_trace_to_div_trace_expansion … H) #K #eq >eq
668 inversion W [ #s #_ #abs destruct ]
669 #hd1' #hd2' #tl' #eq1 #wf #eq2 lapply eq1
670 cut (hd=hd1' ∧ hd2 = hd2' ∧ tl=tl') [ cases daemon (* destruct /3/ *) ]
674 | cases daemon (*<eq @div_well_formed_co_mk_diverging_trace_to_div_trace cases daemon (*//*) ]]]
677 theorem diverging_trace_to_div_trace':
678 ∀t: trace'. diverging' t → ∃d: div_trace'.
679 head_of_stream … t = Some … (head_of_streamseq … d).
681 [ %{(mk_diverging_trace_to_div_trace' … H)}
682 @div_well_formed_co_to_div_well_formed
683 @div_well_formed_co_mk_diverging_trace_to_div_trace
685 | cases t in H; * normalize // #abs cases (?:False) inversion abs
686 [ #s #_ #eq destruct | #hd1 #hd2 #tl #_ #_ #eq destruct ]]
689 (* A stream can be extracted from a streamseq using corecursion. *)
690 let corec stream_of_streamseq (A: Type[0]) (s: streamseq A) : stream A ≝
691 match s with [ sscons hd tl ⇒ scons … hd (stream_of_streamseq … tl) ].
693 (* We need again an expansion lemma to typecheck the proof that the resulting
694 stream is divergent. *)
695 lemma stream_of_streamseq_expansion:
 ∀A,hd,tl.
  stream_of_streamseq A (sscons … hd tl) = scons … hd (stream_of_streamseq … tl).
698 #A #hd #tl <(eta_stream … (stream_of_streamseq …)) //
(* The proof that the resulting stream is diverging also needs corecursion. *)
702 let corec diverging_stream_of_streamseq (s: streamseq state) :
703 diverging' (stream_of_streamseq … s) ≝
704 match s return λs. diverging' (stream_of_streamseq … s)
705 with [ sscons hd tl ⇒ ? ].
706 >stream_of_streamseq_expansion % //
709 let corec well_formed_stream_of_streamseq (d: streamseq state) :
710 div_well_formed' … d → well_formed' (stream_of_streamseq state d) ≝
 match d
  return λd. div_well_formed' d → ?
713 with [ sscons hd1 tl ⇒
714 match tl return λtl. div_well_formed' (sscons … tl) → ?
715 with [ sscons hd2 tl2 ⇒ λH.? ]].
716 >stream_of_streamseq_expansion >stream_of_streamseq_expansion %2
717 [2: <stream_of_streamseq_expansion @well_formed_stream_of_streamseq
719 | @next_step @(H O) ]
722 theorem div_trace_to_diverging_trace':
723 ∀d: div_trace'. ∃t: trace'. diverging' t ∧
724 head_of_stream … t = Some … (head_of_streamseq … d).
725 #d %{(mk_trace' (stream_of_streamseq … d) …)}
726 [ /2/ | % // cases d * // ]
729 (******** How to compare coinductive types *********)
731 (* Inhabitants of inductive types are compared using Leibniz's equality that,
732 on closed terms, coincides with convertibility. Two closed inductive terms
733 are equal iff they reduce to the same normal form.
735 Because of the lazy evaluation of coinductively defined terms, conversion
736 does not behave as expected on inhabitants of coinductive types. For example,
737 the following two definitions, both in normal form, produce the same
738 streamseq 0,1,0,1,… but are not equal because the normal forms are
739 syntactically different. *)
741 let corec zero_one_streamseq1: streamseq ℕ ≝
742 sscons … 0 (sscons … 1 zero_one_streamseq1).
744 let corec one_zero_streamseq: streamseq ℕ ≝
745 sscons … 1 (sscons … 0 one_zero_streamseq).
747 definition zero_one_streamseq2: streamseq ℕ ≝ sscons … 0 one_zero_streamseq.
749 (* In place of Leibniz equality, the right notion to compare coinductive terms
750 is bisimulation. Two terms t1,t2 are bisimilar if every observation on t1 can
751 be performed on t2 as well, and the observed subterms are co-recursively
752 bisimilar. In practice two coinductive terms are bisimilar if they are the same
753 constructor applied to bisimilar coinductive subterms.
755 We define bisimilarity for streamseqs using a coinductive predicate. *)
756 coinductive ss_bisimilar (A: Type[0]) : streamseq A → streamseq A → Prop ≝
 sscons_bisim: ∀x,tl1,tl2.
    ss_bisimilar … tl1 tl2 →
760 ss_bisimilar … (sscons … x tl1) (sscons … x tl2).
(* The two streamseqs above can be proved to be bisimilar. By using the
eta_streamseq trick twice, we expand both definitions once, so as to obtain the
new proof obligation that 0,1,zero_one_streamseq1 is bisimilar to
0,1,zero_one_streamseq2. Then we apply the sscons_bisim constructor twice to
consume the leading 0 and 1, finding again the original goal. Finally we conclude
with the corecursive call (the coinductive hypothesis). The proof is productive
because, after the two rewriting steps, it reduces to two applications of the
sscons_bisim constructor, immediately followed by the corecursive call. *)
let corec zero_one_streamseq_eq: ss_bisimilar … zero_one_streamseq1 zero_one_streamseq2 ≝ ?.
772 <(eta_streamseq … zero_one_streamseq1) normalize
773 <(eta_streamseq … one_zero_streamseq) normalize
774 % % @zero_one_streamseq_eq
777 (* We can finally turn streamseqs into a setoid and lift every operation on
778 streamseqs to morphisms. We first prove reflexivity, symmetry and transitivity
779 of bisimulation because each proof must be given using let corec. *)
780 let corec reflexive_ss_bisimilar (A: Type[0]): reflexive … (ss_bisimilar A) ≝ ?.
781 * #hd #tl % @reflexive_ss_bisimilar
784 let corec symmetric_ss_bisimilar (A: Type[0]): symmetric … (ss_bisimilar A) ≝ ?.
785 * #hd1 #tl1 * #hd2 #tl2 * #hd #tl1' #tl2' #H % @symmetric_ss_bisimilar @H
(* In the proof of transitivity we face a technical problem. After inverting one
of the bisimilarity hypotheses we are left with an equation of the form
790 sscons … hd1 tl1 = sscons … hd2 tl2 and we need to extract hd1 = hd2 and
791 tl1 = tl2 to conclude the proof. The destruct tactic does exactly this, but
792 it produces a proof term that is not recognized to be productive by Matita.
793 The trick is to cut the two equalities and to use destruct to prove them.
794 In this way the destruct tactic is used only in a side proof inside which there
795 is no recursive call, and the whole proof term is recognized to be productive. *)
796 let corec transitive_ss_bisimilar (A: Type[0]): transitive … (ss_bisimilar A) ≝ ?.
797 * #hd1 #tl1 * #hd2 #tl2 * #hd3 #tl3
798 * #hd #tl1' #tl2' #H1
799 #H inversion H #hd' #tl1'' #tl2'' -H #H #EQ1 #_
800 cut (hd=hd' ∧ tl1''=tl2') [ destruct /2/ ] ** #EQ % @transitive_ss_bisimilar //
803 definition STREAMSEQ: Type[0] → setoid ≝
 λA.
  mk_setoid (streamseq A)
806 (mk_equivalence_relation … (ss_bisimilar A) …).
810 unification hint 0 ≔ A:Type[0];
    X ≟ STREAMSEQ A
(* ---------------------------------------- *) ⊢
813 streamseq A ≡ carrier X.
815 (* As an example, we lift streamseq_nth to a morphism. *)
816 definition STREAMSEQ_NTH: ∀A: Type[0]. STREAMSEQ A ⤞ (LEIBNIZ ℕ) ⤞ (LEIBNIZ A) ≝
817 λA. mk_bin_morphism ??? (streamseq_nth A) ….
818 #x1 #x2 #n1 #n2 #bisim * lapply bisim -bisim lapply x2 -x2 lapply x1 -x1 -n2 elim n1
819 [ #x1 #x2 #H inversion H //
820 | #m #IH #x1 #x2 #H inversion H #hd #tl1 #tl2 #bisim #EQ1 #EQ2 destruct @IH //]
(* Working with setoids introduces technical complications that one would rather
avoid. The alternative encoding of streams via sequences does not solve the
issue. Sequences are functions and, to capture the mathematical meaning of
equality for sequences, functions need to be compared using functional
extensionality. Two functions are extensionally equal when they are pointwise
equal, i.e. they map equal inputs to the same outputs. Intensional equality of
functions, instead, just compares the two codes.

For example, the next two enumerations 0,1,2,… are extensionally, but not
intensionally, equal. *)
833 definition enum_N1: seq ℕ ≝ λi. i.
834 definition enum_N2: seq ℕ ≝ λi. i+0.
(* Functional extensionality is defined in the standard library of Matita as
follows. *)
838 definition exteqF: ∀A,B:Type[0].∀f,g:A→B.Prop ≝
839 λA,B.λf,g.∀a.f a = g a.
841 (* The proof that the two sequences above are extensionally equal is trivial. *)
842 lemma enum_N_equal: exteqF … enum_N1 enum_N2.
(* Like for bisimulation, to work systematically with functional extensionality
847 we need to turn the type of sequences into a setoid. The two alternatives are
848 actually equivalent with respect to this issue. *)
850 (******* Generic construction principles *******)
852 (* When an inductive type is defined Matita automatically
853 proves a generic elimination principle using a recursive definition. Later the
854 user can apply the generic eliminator in the middle of a proof, without the
855 need to make a lemma and prove it using a let rec.
857 For example, the principle generated for a list is the following:
∀A:Type[0]. ∀P: list A → Prop.
P (nil …) → (∀hd:A. ∀tl: list A. P tl → P (hd::tl)) →
∀l: list A. P l
863 Here the predicate P is dependent, i.e. it is not just a proposition, but
864 a proposition indexed by the list we are analysing.
866 The corresponding non dependent principle is:
868 ∀A:Type[0]. ∀P: Prop.
P → (∀hd:A. ∀tl: list A. P → P) →
list A → P
that we can rewrite, up to (un)currying, as

∀A:Type[0]. ∀P: Prop.
((unit + A × list A × P) → P) →
list A → P
878 and, turning the recursor into an iterator, to the simpler form
880 ∀A:Type[0]. ∀P: Prop.
((unit + A × P) → P) →
list A → P
884 Thinking of a list as the least fixpoint of the equation
886 list A = unit + A × list A
888 The above principle says that we can prove P as soon as we can prove P for
889 every possible shape of the list, replacing sublists (list A) with the
890 result P of recursive calls of the iterator.
892 Streams are the greatest fixpoint of the same equation
894 stream A = unit + A × stream A
896 Dualizing the idea of a generic destructor for list, we obtain the type of the
897 generic constructor for streams:
899 ∀A:Type[0]. ∀P: Prop.
(P → (unit + A × P)) →
P → stream A
The above principle says that we can build a stream from a proof of P as soon
as we can observe from P what shape the stream has and which proof of P will be
used to generate the remaining part of the stream. The principle is indeed easily
provable in Matita by a corecursive definition. *)
908 let corec stream_coind (A: Type[0]) (P: Prop) (H: P → Sum unit (A × P))
 (p: P) : stream A ≝
 match H p with
 [ inl u ⇒ snil A
 | inr cpl ⇒ let 〈hd,p'〉 ≝ cpl in scons A hd (stream_coind A P H p') ].
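(* For instance, as a sketch of how the principle is used (and assuming the
   reconstruction of stream_coind just given), the everywhere-zero stream should
   be obtainable by using True as the generating property P: the generator always
   emits 0 together with the same trivial proof I. *)
definition zeros_via_coind: stream ℕ ≝
 stream_coind ℕ True (λ_. inr … 〈0,I〉) I.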
914 (* At the moment Matita does not automatically prove any generic construction
915 principle. One reason is that the type of the generated principles is less
916 informative than that for elimination principles. The problem is that in the
917 type theory of Matita it is not possible to generalize the above type to a
dependent version, to obtain the exact dual of the elimination principle.
In the case of the elimination principle, the type P depends on the value of the
input, and the principle proves the dependent product ∀l: list A. P l.
In the case of the introduction principle, the output is P → stream A.
922 To obtain a dependent version, we would need a dual of the dependent product
923 such that the type of the input is dependent on the value of the output: a
924 notion that is not even meaningful when the function is not injective. *)
(* TO DO: ADD AN EXPLANATION OF PRODUCTIVITY *)
(* TO DO: ADD EXAMPLES OF BIG-STEP OPERATIONAL SEMANTICS FOR DIVERGENCE;
   OF SAFETY PROPERTIES;
   OF COINDUCTIVELY GENERATED TOPOLOGIES? *)
(* ################## HOW SHOULD THIS BE EXPLAINED HERE? ####################### *)