4 % \usepackage{amssymb,amsmath}
11 \definecolor{gray}{gray}{0.85}
12 %\newcommand{\logo}[3]{
13 %\parpic(0cm,0cm)(#2,#3)[l]{\includegraphics[width=#1]{whelp-bw}}
16 \newcommand{\AUTO}{\textsc{Auto}}
17 \newcommand{\COQ}{Coq}
18 \newcommand{\ELIM}{\textsc{Elim}}
19 \newcommand{\HELM}{Helm}
20 \newcommand{\HINT}{\textsc{Hint}}
21 \newcommand{\IN}{\ensuremath{\mathbb{N}}}
22 \newcommand{\INSTANCE}{\textsc{Instance}}
23 \newcommand{\IR}{\ensuremath{\mathbb{R}}}
24 \newcommand{\IZ}{\ensuremath{\mathbb{Z}}}
25 \newcommand{\LIBXSLT}{LibXSLT}
26 \newcommand{\LOCATE}{\textsc{Locate}}
27 \newcommand{\MATCH}{\textsc{Match}}
28 \newcommand{\MATITA}{Matita}
29 \newcommand{\METAHEADING}{Symbol & Position \\ \hline\hline}
30 \newcommand{\MOWGLI}{MoWGLI}
31 \newcommand{\NAT}{\ensuremath{\mathit{nat}}}
32 \newcommand{\NATIND}{\mathit{nat\_ind}}
33 \newcommand{\NUPRL}{NuPRL}
34 \newcommand{\OCAML}{OCaml}
35 \newcommand{\PROP}{\mathit{Prop}}
36 \newcommand{\REF}[3]{\ensuremath{\mathit{Ref}_{#1}(#2,#3)}}
37 \newcommand{\TEXMACRO}[1]{\texttt{\char92 #1}}
38 \newcommand{\UWOBO}{UWOBO}
39 \newcommand{\WHELP}{Whelp}
42 \newcommand{\NT}[1]{\langle\mathit{#1}\rangle}
43 \newcommand{\URI}[1]{\texttt{#1}}
45 %{\end{SaveVerbatim}\setlength{\fboxrule}{.5mm}\setlength{\fboxsep}{2mm}%
46 \newenvironment{grafite}{\VerbatimEnvironment
47 \begin{SaveVerbatim}{boxtmp}}%
48 {\end{SaveVerbatim}\setlength{\fboxsep}{3mm}%
50 \fcolorbox{black}{gray}{\BUseVerbatim[boxwidth=0.9\linewidth]{boxtmp}}
54 \newenvironment{example}{\stepcounter{example}\emph{Example} \arabic{example}.}
56 \newcommand{\ASSIGNEDTO}[1]{\textbf{Assigned to:} #1}
57 \newcommand{\FILE}[1]{\texttt{#1}}
58 % \newcommand{\NOTE}[1]{\ifodd \arabic{page} \else \hspace{-2cm}\fi\ednote{#1}}
59 \newcommand{\NOTE}[1]{\ednote{x~\hspace{-10cm}y#1}{foo}}
60 \newcommand{\TODO}[1]{\textbf{TODO: #1}}
63 \newcommand{\sequent}[2]{
64 \savebox{\tmpxyz}[0.9\linewidth]{
65 \begin{minipage}{0.9\linewidth}
69 \end{minipage}}\setlength{\fboxsep}{3mm}%
71 \fcolorbox{black}{gray}{\usebox{\tmpxyz}}
74 \bibliographystyle{plain}
80 \title{The Matita proof assistant}
81 \runningtitle{The Matita proof assistant}
83 \author{Andrea Asperti, Claudio Sacerdoti Coen, Enrico Tassi
84 and Stefano Zacchiroli}
85 \runningauthor{Andrea Asperti, Claudio Sacerdoti Coen, Enrico Tassi
86 and Stefano Zacchiroli}
88 \institute{Department of Computer Science, University of Bologna\\
89 Mura Anteo Zamboni, 7 --- 40127 Bologna, ITALY\\
90 \email{$\{$asperti,sacerdot,tassi,zacchiro$\}$@cs.unibo.it}}
98 \keywords{parole, chiave}
102 \section{Introduction}
104 {\em Matita} is the proof assistant under development by the \HELM{} team
105 \cite{mkm-helm} at the University of Bologna, under the direction of
The origin of the system goes back to 1999. At the time we were mostly
interested in developing tools and techniques to enhance the Web
accessibility of formal libraries of mathematics. Due to its size, the
library of the \COQ{} proof assistant (of the order of 35'000 theorems)
was chosen as a privileged test bench for our work, although experiments
have also been conducted with other systems, notably with \NUPRL{}.
113 The work, mostly performed in the framework of the recently concluded
114 European project IST-33562 \MOWGLI{}~\cite{pechino}, mainly consisted in the
\item exporting the information from the internal representation of
\COQ{} to a system- and platform-independent format. Since XML was at the
time an emerging standard, we naturally adopted this technology, fostering
a content-based architecture for future systems, where the documents
of the library were the main components around which everything else
123 \item developing indexing and searching techniques supporting semantic
124 queries to the library; these efforts gave birth to our \WHELP{}
125 search engine, described in~\cite{whelp};
126 \item developing languages and tools for a high-quality notational
127 rendering of mathematical information; in particular, we have been
active in the MathML Working Group since 1999, and developed inside
129 \HELM{} a MathML-compliant widget for the GTK graphical environment
130 which can be integrated in any application.
The exportation issue, extensively discussed in \cite{exportation-module},
has several major implications worth discussing.
136 point concerns the kind of content information to be exported. In a
137 proof assistant like \COQ{}, proofs are represented in at least three clearly
distinguishable formats: \emph{scripts} (i.e. sequences of commands issued by the
user to the system during an interactive proof session), \emph{proof objects}
(the low-level representation of proofs in the form of
lambda-terms, readable to and checked by the kernel) and \emph{proof-trees} (a
kind of intermediate representation, vaguely inspired by a sequent-like
notation, that inherits most of the defects but essentially
none of the advantages of the previous representations).
145 Partially related to this problem, there is the
146 issue of the {\em granularity} of the library: scripts usually comprise
147 small developments with many definitions and theorems, while
148 proof objects correspond to individual mathematical items.
150 In our case, the choice of the content encoding was eventually dictated
151 by the methodological assumption of offering the information in a
152 stable and system-independent format. The language of scripts is too
153 oriented to \COQ, and it changes too rapidly to be of any interest
to third parties. On the other hand, the language of proof objects
156 the logical framework (the Calculus of Inductive Constructions, in
157 the case of \COQ), is grammatically simple, semantically clear and,
158 especially, is very stable (as kernels of proof assistants
So the granularity of the library is at the level of individual
objects, which also justifies, from another point of view, the need
162 for efficient searching techniques for retrieving individual
163 logical items from the repository.
165 The main (possibly only) problem with proof objects is that they are
166 difficult to read and do not directly correspond to what the user typed
167 in. An analogy frequently made in the proof assistant community is that of
168 comparing the vernacular language of scripts to a high level source language
and lambda terms to the assembly language they are compiled to. We do not
170 share this view and prefer to look at scripts as an imperative language,
171 and to lambda terms as their denotational semantics; still, however,
172 denotational semantics is possibly more formal but surely not more readable
173 than the imperative source.
175 For all the previous reasons, a huge amount of work inside \MOWGLI{} has
176 been devoted to automatic reconstruction of proofs in natural language
177 from lambda terms. Since lambda terms are in close connection
178 with natural deduction
179 (that is still the most natural logical language discovered so far)
the work is not as hopeless as it may seem, especially if rendering
181 is combined, as in our case, with dynamic features supporting
182 in-line expansions or contractions of subproofs. The final
183 rendering is probably not entirely satisfactory (see \cite{ida} for a
184 discussion), but surely
readable (the actual quality largely depends on the way the lambda
Summing up, we already had at our disposal the following tools and techniques:
190 \item XML specifications for the Calculus of Inductive Constructions,
191 with tools for parsing and saving mathematical objects in such a format;
192 \item metadata specifications and tools for indexing and querying the
194 \item a proof checker (i.e. the {\em kernel} of a proof assistant),
implemented to check that we exported from the \COQ{} library all the
196 logically relevant content;
197 \item a sophisticated parser (used by the search engine), able to deal
198 with potentially ambiguous and incomplete information, typical of the
199 mathematical notation \cite{};
200 \item a {\em refiner}, i.e. a type inference system, based on complex
201 existential variables, used by the disambiguating parser;
202 \item complex transformation algorithms for proof rendering in natural
204 \item an innovative rendering widget, supporting high-quality bidimensional
205 rendering, and semantic selection, i.e. the possibility to select semantically
meaningful rendering expressions, and to paste the respective content into
207 a different text area.
\NOTE{the widget does not have semantic selection}
210 Starting from all this, the further step of developing our own
211 proof assistant was too
212 small and too tempting to be neglected. Essentially, we ``just'' had to
213 add an authoring interface, and a set of functionalities for the
214 overall management of the library, integrating everything into a
215 single system. \MATITA{} is the result of this effort.
At first sight, \MATITA{} looks like (and partly is) a \COQ{} clone. This is
218 more the effect of the circumstances of its creation described
219 above than the result of a deliberate design. In particular, we
220 (essentially) share the same foundational dialect of \COQ{} (the
Calculus of Inductive Constructions), the same implementation
language (\OCAML{}), and the same (script-based) authoring philosophy.
223 However, as we shall see, the analogy essentially stops here.
In a sense, we like to think of \MATITA{} as the way \COQ{} would
look if it were entirely rewritten from scratch: just to give an
idea, although \MATITA{} currently supports almost all functionalities of
\COQ{}, it links 60'000 lines of \OCAML{} code, against ... of \COQ{} (and
we are convinced that, starting from scratch again, we could further
reduce our code in a sensible way).\NOTE{\COQ{} line count}
\item choice of the foundational system
\item a system independent of Coq
\item the possibility to experiment (with architectural, logical,
implementation solutions, \dots)
\item compatibility with legacy systems
\section{The \HELM{} library}
\subsection{A fully visible library}
\NOTE{I assume the content-centric approach has already been discussed}
247 Our commitment to the content-centric view of the architecture of the system
has important consequences for the user's experience and for the functionalities
249 of several components of \MATITA. In the content-centric view the library
of mathematical knowledge is an already existing and completely autonomous
251 entity that we are allowed to exploit and augment using \MATITA. Thus, in
252 principle, when the user starts to prove a new theorem she has complete
253 visibility of the library and she can refer to every definition and lemma,
254 also using the mathematical notation already developed. In a similar way,
255 every form of automation of the system must be able to analyze and possibly
256 exploit every notion in the library.
The benefits of this approach amply justify the non-negligible price to pay
in the development of several components. We now analyse a few of the causes
of this additional complexity.
262 \subsubsection{Ambiguity}
263 A rich mathematical library includes equivalent definitions and representations
264 of the same notion. Moreover, mathematical notation inside a rich library is
surely highly overloaded. As a consequence, every mathematical expression the
user provides is highly ambiguous, since all the definitions,
representations and special notations are available at once to the user.
269 The usual solution to the problem, as adopted for instance in Coq, is to
270 restrict the user's scope to just one interpretation for each definition,
271 representation or notation. In this way much of the ambiguity is removed,
burdening the user, who must somehow declare what is in scope and must
use special syntax when she needs to refer to something not in scope.
275 Even with this approach ambiguity cannot be completely removed since implicit
276 coercions can be arbitrarily inserted by the system to ``change the type''
277 of subterms that do not have the expected type. Usually implicit coercions
278 are used to overcome the absence of subtyping that should mimic the subset
279 relation found in set theory. For instance, the expression
280 $\forall n \in nat. 2 * n * \pi \equiv_\pi 0$ is correct in set theory since
281 the set of natural numbers is a subset of that of real numbers; the
282 corresponding expression $\forall n:nat. 2*n*\pi \equiv_\pi 0$ is not well typed
and requires the automatic insertion of the coercion
$\mathit{real\_of\_nat}: nat \to R$,
either around both $2$ and $n$ (to make both products be over the real numbers) or
285 around the product $2*n$. The usual approach consists in either rejecting the
286 ambiguous term or arbitrarily choosing one of the interpretations. For instance,
287 Coq rejects the declaration of coercions that have alternatives
288 (i.e. already declared coercions with the same domain and codomain)
289 or that are obtained composing other coercions in order to
290 avoid making several terms highly ambiguous by choosing to insert any one of the
291 alternative coercions. Coq also arbitrarily chooses how to insert coercions in
292 terms to make them well typed when there is more than one possibility (as in
293 the previous example).
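The two alternative coercion insertions discussed above can be made concrete
with a small sketch (the term encoding and the names below are our own
illustration, not Matita's internal representation):

```python
# Illustrative mock-up (our own toy encoding, not Matita code): the two
# well-typed ways to insert real_of_nat into the product 2 * n, where
# the surrounding context expects a real number.

def coercion_insertions(two, n):
    """Return both legal coercion insertions, each of real type."""
    return [
        # coerce both factors, then multiply over the reals
        ("Rmult", ("real_of_nat", two), ("real_of_nat", n)),
        # multiply over the naturals, then coerce the whole product
        ("real_of_nat", ("mult", two, n)),
    ]
```

Both results are well typed, which is precisely why the system must either
reject the term or choose one insertion arbitrarily.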
295 The approach we are following is radically different. It consists in dealing
with ambiguous expressions instead of avoiding them. As a last resort,
when the system is unable to disambiguate the input, the user is interactively
asked to provide more information, which is recorded to avoid asking the
same question again in subsequent processing of the same input.
More details on our approach can be found in Sect.~\ref{sec:disambiguation}.
302 \subsubsection{Consistency}
303 A large mathematical library is likely to be logically inconsistent.
304 It may contain incompatible axioms or alternative conjectures and it may
305 even use definitions in incompatible ways. To clarify this last point,
306 consider two identical definitions of a set of elements of a given type
307 (or of a category of objects of a given type). Let us call the two definitions
\emph{A-Set} and \emph{B-Set} (or \emph{A-Category} and \emph{B-Category}).
It is perfectly legitimate to either form the \emph{A-Set} of every \emph{B-Set}
or the \emph{B-Set} of every \emph{A-Set} (the same for categories). This just corresponds
to assuming that a \emph{B-Set} (respectively an \emph{A-Set}) is a small set, whereas
an \emph{A-Set} (respectively a \emph{B-Set}) is a big set (possibly of small sets).
However, if one part of the library assumes \emph{A-Set}s to be the small ones
and another part of the library assumes \emph{B-Set}s to be the small ones, the
library as a whole will be logically inconsistent.
317 Logical inconsistency has never been a problem in the daily work of a
mathematician. The mathematician simply imposes on himself a discipline
that restricts him to consistent subsets of the mathematical knowledge.
However, in doing so he does not choose the subset in advance by forgetting
the rest of his knowledge.
Contrary to a mathematician, the usual tendency in the world of assisted
automation is that of restricting in advance the part of the library that
will be used later on, checking its consistency by construction.
\subsection{Searching and indexing}
\subsection{Explicit substitutions vs modules}
\subsection{XML and library management}
\section{User Interface}
\subsection{Absence of proof trees / natural language rendering}
346 \subsection{Disambiguation}
347 \label{sec:disambiguation}
350 \subsubsection{Term input}
The primary form of user interaction employed by \MATITA{} is textual script
editing: the user modifies the script and evaluates, step by step, its composing
\emph{statements}. Examples of statements are inductive type definitions,
355 theorem declarations, LCF-style tacticals, and macros (e.g. \texttt{Check} can
356 be used to ask the system to refine a given term and pretty print the result).
357 Since many statements refer to terms of the underlying calculus, \MATITA{} needs
358 a concrete syntax able to encode terms of the Calculus of Inductive
361 Two of the requirements in the design of such a syntax are apparently in
364 \item the syntax should be as close as possible to common mathematical practice
365 and implement widespread mathematical notations;
\item each term described by the syntax should be non-ambiguous, meaning that
there should exist a function which associates a CIC term to it.
These two requirements are addressed in \MATITA{} by means of two mechanisms
371 which work together: \emph{term disambiguation} and \emph{extensible notation}.
372 Their interaction is visible in the architecture of the \MATITA{} input phase,
373 depicted in Fig.~\ref{fig:inputphase}. The architecture is articulated as a
pipeline of three levels: the concrete syntax level (level 0) is the one the user
375 has to deal with when inserting CIC terms; the abstract syntax level (level 2)
376 is an internal representation which intuitively encodes mathematical formulae at
the content level~\cite{adams,mkm-structure}; the last level is that of
382 \includegraphics[width=0.9\textwidth]{input_phase}
383 \caption{\MATITA{} input phase}
385 \label{fig:inputphase}
388 Requirement (1) is addressed by a built-in concrete syntax for terms, described
in Tab.~\ref{tab:termsyn}, and by the extensible notation mechanism, which offers a
way to extend the available mathematical notations. Extensible notation, which
391 is also in charge of providing a parsing function mapping concrete syntax terms
392 to content level terms, is described in Sect.~\ref{sec:notation}. Requirement
(2) is addressed by the joint action of that parsing function and of
disambiguation, which provides a function from content level terms to CIC terms.
396 \subsubsection{Sources of ambiguity}
398 The translation from content level terms to CIC terms is not straightforward
because some nodes of the content encoding admit more than one CIC encoding,
400 invalidating requirement (2).
403 \label{ex:disambiguation}
Consider the concrete syntax term \texttt{\TEXMACRO{forall} x. x +
ln 1 = x} of Fig.~\ref{fig:inputphase}(a); it could be the type of a lemma the
407 user may want to prove. Assuming that both \texttt{+} and \texttt{=} are parsed
408 as infix operators, all the following questions are legitimate and must be
409 answered before obtaining a CIC term from its content level encoding
410 (Fig.~\ref{fig:inputphase}(b)):
\item Since \texttt{ln} is an unbound identifier, which CIC constant does it
represent? Many different theorems in the library may share its (rather
\item Which kind of number (\IN, \IR, \dots) does the \texttt{1} literal stand for?
Which encoding is used in CIC to represent it? E.g., assuming $1\in\IN$, is
it a unary or a binary encoding?
\item Which kind of equality does the ``='' node represent? Is it Leibniz's
polymorphic equality? Is it a decidable equality over \IN, \IR, \dots?
429 In \MATITA, three \emph{sources of ambiguity} are admitted for content level
terms: unbound identifiers, literal numbers, and operators. Each instance of
an ambiguity source (an \emph{ambiguous entity}) occurring in a content level term is
432 associated to a \emph{disambiguation domain}. Intuitively a disambiguation
domain is a set of CIC terms which may be substituted for an ambiguous entity
434 during disambiguation. Each item of the domain is said to be an
435 \emph{interpretation} for the ambiguous entity.
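These notions can be sketched as follows (a toy model of ours; the two URIs
for \texttt{+} are the ones of the Matita and Coq standard libraries quoted
later, everything else is illustrative):

```python
# Mock-up (ours) of ambiguous entities and disambiguation domains.
# An ambiguous entity is a (kind, value) pair; an interpretation is a
# CIC term, identified here simply by its URI.

def domain(entity):
    """Return the disambiguation domain of an ambiguous entity:
    the candidate CIC terms it may stand for."""
    kind, value = entity
    if kind == "operator" and value == "+":
        # overloaded "+": addition over naturals or over reals
        return ["cic:/matita/nat/plus/plus.con",
                "cic:/Coq/Reals/Rdefinitions/Rplus.con"]
    # identifiers would be resolved querying the library (Locate),
    # literals by applying the number interpretation functions
    return []
```

Each element of the returned list is one \emph{interpretation} of the entity.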
437 \emph{Unbound identifiers} (question 1) are ambiguous entities since the
438 namespace of CIC objects is not flat and the same identifier may denote many
of them. For example, the short name \texttt{plus\_assoc} in the \HELM{} library
440 is shared by three different theorems stating the associative property of
different additions. This kind of ambiguity can be avoided if the user is willing
to use long names (in the form of URIs in the \texttt{cic://} scheme) in the
concrete syntax, with the obvious drawback of obtaining long and unreadable
Given an unbound identifier, the corresponding disambiguation domain is computed
by querying the library for all constants, inductive types, and inductive type
448 constructors having it as their short name (see the \LOCATE{} query in
449 Sect.~\ref{sec:metadata}).
451 \emph{Literal numbers} (question 2) are ambiguous entities as well since
452 different kinds of numbers can be encoded in CIC (\IN, \IR, \IZ, \dots) using
different encodings. Considering the restricted example of natural numbers, we
can for instance encode them in CIC using inductive datatypes with a number of
constructors equal to the encoding base plus 1, obtaining one encoding for each
458 For each possible way of mapping a literal number to a CIC term, \MATITA{} is
aware of a \emph{number interpretation function} which, when applied to the
natural number denoted by the literal\footnote{At the moment, only literal
natural numbers are supported in the concrete syntax.}, returns a corresponding CIC
term. The disambiguation domain for a given literal number is built by applying to
the literal all available number interpretation functions in turn.
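As an illustration, a unary (Peano) interpretation function and a
hypothetical real-number one can be sketched like this (our own mock-up;
CIC terms are rendered as strings):

```python
# Mock-up (ours) of number interpretation functions: each maps the
# natural number denoted by a literal to a CIC term (shown as a string).
# Applying all of them in turn builds the literal's domain.

def unary(n):
    """Peano (unary) encoding: 2 -> (S (S O))."""
    return "O" if n == 0 else "(S " + unary(n - 1) + ")"

def real(n):
    """Hypothetical interpretation into the real numbers."""
    return "(real_of_nat %d)" % n

NUMBER_INTERPRETATIONS = [unary, real]

def literal_domain(n):
    """Domain of a literal: one candidate per interpretation function."""
    return [f(n) for f in NUMBER_INTERPRETATIONS]
```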
465 Number interpretation functions can be defined in OCaml or directly using
\TODO{notation for numbers}.
\emph{Operators} (question 3) are intuitively heads of applications; as such they
are always applied to a (possibly empty) sequence of arguments. Their ambiguity
is a necessity, since it is often the case that some notation is used in an
overloaded fashion to hide the use of different CIC constants which encode similar
concepts. For example, in the standard library of \MATITA{} the infix \texttt{+}
notation is available; parsing it builds a binary \texttt{Op(+)} node, whose
474 disambiguation domain may refer to different constants like the addition over
475 natural numbers \URI{cic:/matita/nat/plus/plus.con} or that over real numbers of
476 the \COQ{} standard library \URI{cic:/Coq/Reals/Rdefinitions/Rplus.con}.
478 For each possible way of mapping an operator application to a CIC term,
479 \MATITA{} knows an \emph{operator interpretation function} which, when applied
480 to an operator and its arguments, returns a CIC term. The disambiguation domain
for a given operator is built by applying to the operator and its arguments all
available operator interpretation functions in turn.
Operator interpretation functions can be added using the
\texttt{interpretation} statement. For example, among the first lines of the
script \FILE{matita/library/logic/equality.ma} from the \MATITA{} standard
490 interpretation "leibnitz's equality"
492 (cic:/matita/logic/equality/eq.ind#xpointer(1/1) _ x y).
495 Evaluating it in \MATITA{} will add an operator interpretation function for the
496 binary operator \texttt{eq} which expands to the CIC term on the right hand side
of the statement. That CIC term can be written using only built-in concrete
syntax and must contain no ambiguity sources; still, it can refer to operator
arguments bound on the left hand side and can contain implicit terms (denoted
with \texttt{\_}) which will be expanded to fresh metavariables. The latter
feature is used in the example above for the first argument of Leibniz's
polymorphic equality.
504 \subsubsection{Disambiguation algorithm}
\NOTE{I assume refine has already been discussed}
A \emph{disambiguation algorithm} takes as input a content level term and returns
a fully determined CIC term. The key observation on which a disambiguation
algorithm is based is that, given a content level term with more than one source
of ambiguity, not all possible combinations of interpretations lead to a typable
CIC term. In the term of Ex.~\ref{ex:disambiguation}, for instance, the
interpretation of \texttt{ln} as a function from \IR{} to \IR{} and the
interpretation of \texttt{1} as the Peano number $1$ cannot coexist. The notion
of ``cannot coexist'' in the disambiguation of \MATITA{} is inherited from the
refiner described in Sect.~\ref{sec:metavariables}: as long as
$\mathit{refine}(c)\neq\epsilon$, the combination of interpretations which led to
521 The \emph{naive disambiguation algorithm} takes as input a content level term
522 $t$ and proceeds as follows:
526 \item Create disambiguation domains $\{D_i | i\in\mathit{Dom}(t)\}$, where
527 $\mathit{Dom}(t)$ is the set of ambiguity sources of $t$. Each $D_i$ is a set
528 of CIC terms and can be built as described above.
530 \item Let $\Phi = \{\phi_i | {i\in\mathit{Dom}(t)},\phi_i\in D_i\}$ be an
531 interpretation for $t$. Given $t$ and an interpretation $\Phi$, a CIC term is
fully determined. Iterate over all possible interpretations of $t$, refine
the corresponding CIC terms, and keep only the interpretations which lead to CIC
terms $c$ s.t. $\mathit{refine}(c)\neq\epsilon$ (i.e. interpretations that determine
\item Let $n$ be the number of interpretations that survived step 2. If $n=0$,
signal a type error. If $n=1$, we have found exactly one CIC term corresponding
to $t$; return it as the output of the disambiguation phase. If $n>1$, we have
found several different CIC terms which can correspond to the content level term;
let the user choose one of the $n$ interpretations and return the
The above algorithm is highly inefficient, since the number of possible
interpretations $\Phi$ grows exponentially with the number of ambiguity sources.
The actual algorithm used in \MATITA{} is far more efficient, being, in the
average case, linear in the number of ambiguity sources.
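For concreteness, the naive algorithm can be sketched as follows (a mock-up
of ours: the refiner is simulated by a function returning \texttt{None} to
model the $\epsilon$ case):

```python
# Mock-up (ours) of the naive disambiguation algorithm: enumerate every
# interpretation in the Cartesian product of the disambiguation domains
# and keep only those accepted by the (mock) refiner.
from itertools import product

def naive_disambiguate(refine, domains):
    """refine(phi) returns None to model refine(c) = epsilon
    (a type error); any other value means the term is typable."""
    return [phi for phi in product(*domains) if refine(phi) is not None]
```

With two sources of ambiguity having $k$ interpretations each, the refiner
is already called $k^2$ times, which illustrates the exponential blow-up.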
551 The efficient algorithm --- thoroughly described along with an analysis of its
complexity in~\cite{disambiguation} --- exploits the refiner and the metavariable
553 extension (Sect.~\ref{sec:metavariables}) of the calculus used in \MATITA.
557 The efficient algorithm can be applied if the logic can be extended with
558 metavariables and a refiner can be implemented. This is the case for CIC and
559 several other logics.
\emph{Metavariables}~\cite{munoz} are typed, non-linear placeholders that can
561 occur in terms; $?_i$ usually denotes the $i$-th metavariable, while $?$ denotes
562 a freshly created metavariable. A \emph{refiner}~\cite{McBride} is a
function whose input is a term with placeholders and whose output is either a
new term, obtained by instantiating some placeholder, or $\epsilon$, meaning that no
565 well typed instantiation could be found for the placeholders occurring in
566 the term (type error).
568 The efficient algorithm starts with an interpretation $\Phi_0 = \{\phi_i |
569 \phi_i = ?, i\in\mathit{Dom}(t)\}$,
570 which associates a fresh metavariable to each
571 source of ambiguity. Then it iterates refining the current CIC term (i.e. the
term obtained by interpreting $t$ with $\Phi_i$). If the refinement succeeds, the
next interpretation $\Phi_{i+1}$ will be created by \emph{making a choice}, that is
by replacing a placeholder with one of the possible choices from the corresponding
disambiguation domain. The placeholder to be replaced is chosen following a
preorder visit of the ambiguous term. If the refinement fails, the current set of
choices cannot lead to a well-typed term and backtracking is attempted.
Once an unambiguous correct interpretation is found (i.e. $\Phi_i$ no
longer contains any placeholder), backtracking is attempted
anyway, to find the other correct interpretations.
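The choose-refine-backtrack loop just described can be sketched as follows
(again a mock-up of ours, with \texttt{"?"} standing for a fresh
metavariable):

```python
# Mock-up (ours) of the refine-and-backtrack strategy: placeholders are
# instantiated left to right; a refiner failure on a partially
# instantiated interpretation prunes every extension of it.

def efficient_disambiguate(refine, domains):
    """refine sees partial interpretations in which '?' models a fresh
    metavariable; it returns None on type errors."""
    results = []

    def go(chosen, rest):
        if not rest:
            if refine(tuple(chosen)) is not None:
                results.append(tuple(chosen))
            return
        for choice in rest[0]:
            partial = tuple(chosen + [choice] + ["?"] * (len(rest) - 1))
            if refine(partial) is None:
                continue  # prune the whole subtree of further choices
            go(chosen + [choice], rest[1:])

    go([], domains)
    return results
```

The pruning step is what makes the average case cheap: a failing partial
interpretation is rejected once, instead of once per completion.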
The intuition which explains why this algorithm is more efficient is that as soon
as a term containing placeholders is not typable, no further instantiation of
its placeholders can lead to a typable term. For example, during the
585 disambiguation of user input \texttt{\TEXMACRO{forall} x. x*0 = 0}, an
586 interpretation $\Phi_i$ is encountered which associates $?$ to the instance
587 of \texttt{0} on the right, the real number $0$ to the instance of \texttt{0} on
588 the left, and the multiplication over natural numbers (\texttt{mult} for short)
to \texttt{*}. The refiner will fail, since \texttt{mult} requires a natural
argument, and no further instantiation of the placeholder will be tried.
If, at the end of the disambiguation, more than one interpretation is still
possible, the user will be asked to choose the intended one (see
594 Fig.~\ref{fig:disambiguation}).
597 % \centerline{\includegraphics[width=0.9\textwidth]{disambiguation-1}}
598 \caption{\label{fig:disambiguation} Disambiguation: interpretation choice}
601 Details of the disambiguation algorithm of \WHELP{} can
602 be found in~\cite{disambiguation}, where an equivalent algorithm
603 that avoids backtracking is also presented.
\subsection{Notation}
\subsection{Semantic selection, cut and paste, hyperlinks}
618 Patterns are the textual counterpart of the MathML widget graphical
Matita benefits from a graphical interface and a powerful MathML rendering
widget that allows the user to select pieces of the sequent he is working
on. While this is an extremely intuitive way for the user to
restrict the application of tactics, for example to some subterms of the
conclusion or to some hypotheses, the way this action is recorded in the text
script is not obvious.\\
627 In \MATITA{} this issue is addressed by patterns.
629 \subsubsection{Pattern syntax}
A pattern is composed of two parts: a $\NT{sequent\_path}$ and a
The former mocks up a sequent, discharging unwanted subterms with $?$ and
633 selecting the interesting parts with the placeholder $\%$.
634 The latter is a term that lives in the context of the placeholders.
The concrete syntax is reported in Table~\ref{tab:pathsyn}.
\NOTE{I use names different from those in the grammar, but more meaningful}
639 \caption{\label{tab:pathsyn} Concrete syntax of \MATITA{} patterns.\strut}
642 \begin{array}{@{}rcll@{}}
644 ::= & [~\verb+in match+~\NT{wanted}~]~[~\verb+in+~\NT{sequent\_path}~] & \\
646 ::= & \{~\NT{ident}~[~\verb+:+~\NT{multipath}~]~\}~
647 [~\verb+\vdash+~\NT{multipath}~] & \\
648 \NT{wanted} & ::= & \NT{term} & \\
649 \NT{multipath} & ::= & \NT{term\_with\_placeholders} & \\
655 \subsubsection{How patterns work}
656 Patterns mimic the user's selection in two steps. The first one
657 selects roots (subterms) of the sequent, using the
658 $\NT{sequent\_path}$, while the second
one searches for the $\NT{wanted}$ term starting from these roots. Both steps are
optional, and by convention the empty pattern selects the whole
665 concerns only the $[~\verb+in+~\NT{sequent\_path}~]$
part of the syntax. $\NT{ident}$ is a hypothesis name and
selects the assumption on which the following optional $\NT{multipath}$
will operate. \verb+\vdash+ can be considered the name of the goal.
If the whole pattern is omitted, the whole goal will be selected.
If one or more hypothesis names are given, the selection is restricted to
these assumptions. If a $\NT{multipath}$ is omitted, the whole
assumption is selected. Remember that the user can be mostly
673 unaware of this syntax, since the system is able to write down a
674 $\NT{sequent\_path}$ starting from a visual selection.
\NOTE{This does not work in Matita yet}
A $\NT{multipath}$ is a CIC term in which a special constant $\%$
679 The roots of discharged subterms are marked with $?$, while $\%$
680 is used to select roots. The default $\NT{multipath}$, the one that
681 selects the whole term, is simply $\%$.
682 Valid $\NT{multipath}$ are, for example, $(?~\%~?)$ or $\%~\verb+\to+~(\%~?)$
683 that respectively select the first argument of an application or
684 the source of an arrow and the head of the application that is
685 found in the arrow target.
The first phase selects not only terms (roots of subterms) but also
their context, which will eventually be used in the second phase.

The second phase plays a role only if the $[~\verb+in match+~\NT{wanted}~]$
part is specified. From the first phase we have some terms, which we
will regard as subterm roots, together with their contexts. For each of these
contexts, the $\NT{wanted}$ term is disambiguated in it, and the
corresponding root is searched for a subterm $\alpha$-equivalent to
$\NT{wanted}$. The result of this search is the selection the tactic
will act on.
Since the first step is equipotent to the composition of the two
steps, the system uses it to represent each visual selection.
The second step is meant only for
experienced users who write patterns by hand, since it really
helps in writing concise patterns, as we will see in the
following examples.
\subsubsection{Examples}
To explain how the first step works, let us give an example. Suppose
you want to prove the uniqueness of the identity element $0$ for natural
sum, and that you can rely on the previously proved left
injectivity of the sum, that is $inj\_plus\_l:\forall x,y,z.x+y=z+y \to x =z$.
Starting the proof with
\begin{grafite}
theorem valid_name: \forall n,m. m + n = n \to m = O.
\end{grafite}
and introducing the hypotheses
leads you to the following sequent:
\begin{grafite}
n : nat
m : nat
H : m + n = n
______________________________________
m = O
\end{grafite}
where you want to change the right-hand side of the equation in the $H$
hypothesis to $O + n$, and then use $inj\_plus\_l$ to prove $m=O$. This is
done with
\begin{grafite}
change in H:(? ? ? %) with (O + n).
\end{grafite}
This pattern, which is a simple instance of the $\NT{sequent\_path}$
grammar entry, acts on $H$, whose type (without notation) is $(eq~nat~(m+n)~n)$:
it discharges the head of the application and the first two arguments with
$?$ and selects the last argument with $\%$. The syntax may seem uncomfortable,
but the user can simply select with the mouse the right-hand side of the
equation and leave to the system the burden of writing down in the script file
the corresponding pattern, with $?$ and $\%$ in the right places (which is not
trivial, especially where implicit arguments are hidden by the notation, like
the type $nat$ in this example).
Changing all the occurrences of $n$ in the hypothesis $H$ to $O+n$
works too, and the experienced user can obtain the same effect by directly
writing a simpler pattern that uses the second phase:
\begin{grafite}
change in match n in H with (O + n).
\end{grafite}
In this case the $\NT{sequent\_path}$ selects the whole $H$, while
the second phase searches for the wanted $n$ inside it by
$\alpha$-equivalence. The resulting
equation will be $m+(O+n)=O+n$, since the second phase finds two
occurrences of $n$ in $H$ and the tactic changes both.
Just for completeness, the second pattern is equivalent to the
following one, which is less readable but uses only the first phase:
\begin{grafite}
change in H:(? ? (? ? %) %) with (O + n).
\end{grafite}
\subsubsection{Tactics supporting patterns}
In \MATITA{} all the tactics that can be restricted to subterms of the working
sequent accept the pattern syntax. In particular, these tactics are: simplify,
change, fold, unfold, generalize, replace and rewrite.
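By analogy with the change examples above, the same pattern syntax should
drive any tactic in this list; for instance (a sketch, reusing the
illustrative hypothesis $H : m + n = n$ from the previous example):
\begin{grafite}
simplify in match (m + n) in H.
\end{grafite}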
\NOTE{Currently rewrite and fold do not support phase 2. To support
it, they must transform a phase1+phase2 pattern into a phase1-only
pattern, as in the last example; this can be done with
pattern\_of(select(pattern)).}
\subsubsection{Comparison with Coq}
Coq has two different ways of restricting the application of tactics to
subterms of the sequent, both relying on the same special syntax to identify
occurrences of a term.

The first way is to use this special syntax to specify directly to the
tactic the occurrences of a wanted term that should be affected, while
the second is to prepare the sequent with another tactic, called
pattern, and then apply the real tactic. Note that the choice is not
left to the user, since some tactics need the sequent to be prepared
with pattern and do not accept this special syntax directly.
The basic idea is that, to identify a subterm of the sequent, we can
write it down and say that we want, for example, the third and the fifth
occurrence of it (counting from left to right). In our previous example,
to change only the right-hand side of the equation, the correct command is
\begin{grafite}
change n at 2 in H with (O + n)
\end{grafite}
meaning that, in the hypothesis $H$, the $n$ we want to change is the
second one we encounter proceeding from left to right.
The tactic pattern computes a
$\beta$-expansion of a part of the sequent with respect to some
occurrences of the given term. In the previous example, the command
\begin{grafite}
pattern n at 2 in H
\end{grafite}
would have resulted in this sequent:
\begin{grafite}
n : nat
m : nat
H : (fun n0 : nat => m + n = n0) n
============================
m = O
\end{grafite}
where $H$ is $\beta$-expanded over the second $n$
occurrence. This is a trick to make the unification algorithm ignore
the head of the application (since the unification is essentially
first-order) but normally operate on the arguments.
This works for some tactics, like rewrite and replace,
but not, for example, for change and other tactics that do not rely on
unification.
The idea behind this way of identifying subterms is not really far
from the idea behind patterns, but it fails to extend to
complex notation, since it relies on a one-dimensional representation of the
sequent. Real mathematical notation places arguments above and below the
operator (as in indexed sums or integrals), or even inside a two-dimensional
matrix. In these cases, using the mouse to select the wanted term is probably
the only way to tell the system exactly what you want to do.
One of the goals of \MATITA{} is to use modern publishing techniques, and
adopting a method for restricting the application domain of tactics that
discourages the use of heavy mathematical notation would definitely be a bad
choice.
\subsection{Tacticals}

\subsection{Named variables and lazy disambiguation}

\subsection{Metavariables}
\label{sec:metavariables}

\section{Drawbacks, missing features, \dots}

\subsection{Extraction}

\subsection{Error localization}
We would like to thank all the students who during the past
five years collaborated on the \HELM{} project and contributed to
the development of \MATITA{}, and in particular
A.~Griggio, F.~Guidi, P.~Di~Lena, L.~Padovani, I.~Schena, M.~Selmi,

\bibliography{matita}