\documentclass[]{kluwer}
\newcommand{\component}{component}
\newcommand{\components}{components}
\newcommand{\AUTO}{\textsc{Auto}}
\newcommand{\BOXML}{BoxML}
\newcommand{\COQ}{Coq}
\newcommand{\COQIDE}{CoqIde}
\newcommand{\ELIM}{\textsc{Elim}}
\newcommand{\GDOME}{Gdome}
\newcommand{\GTK}{GTK+}
\newcommand{\GTKMATHVIEW}{\textsc{GtkMathView}}
\newcommand{\HELM}{Helm}
\newcommand{\HINT}{\textsc{Hint}}
\newcommand{\IN}{\ensuremath{\dN}}
\newcommand{\INSTANCE}{\textsc{Instance}}
\newcommand{\IR}{\ensuremath{\dR}}
\newcommand{\IZ}{\ensuremath{\dZ}}
\newcommand{\LIBXSLT}{LibXSLT}
\newcommand{\LEGO}{Lego}
\newcommand{\LOCATE}{\textsc{Locate}}
\newcommand{\MATCH}{\textsc{Match}}
\newcommand{\MATHML}{MathML}
\newcommand{\MATITA}{Matita}
\newcommand{\MATITAC}{\texttt{matitac}}
\newcommand{\MATITADEP}{\texttt{matitadep}}
\newcommand{\MOWGLI}{MoWGLI}
\newcommand{\MOWGLIIST}{IST-2001-33562}
\newcommand{\NAT}{\ensuremath{\mathit{nat}}}
\newcommand{\NATIND}{\mathit{nat\_ind}}
\newcommand{\NUPRL}{NuPRL}
\newcommand{\OCAML}{OCaml}
\newcommand{\PROP}{\mathit{Prop}}
\newcommand{\REF}[3]{\ensuremath{\mathit{Ref}_{#1}(#2,#3)}}
\newcommand{\REWRITEHINT}{\textsc{RewriteHint}}
\newcommand{\TEXMACRO}[1]{\texttt{\char92 #1}}
\newcommand{\UWOBO}{UWOBO}
\newcommand{\GETTER}{Getter}
\newcommand{\WHELP}{Whelp}
\newcommand{\DOT}{\ensuremath{\mbox{\textbf{.}}}}
\newcommand{\SEMICOLON}{\ensuremath{\mbox{\textbf{;}}}}
\newcommand{\BRANCH}{\ensuremath{\mbox{\textbf{[}}}}
\newcommand{\SHIFT}{\ensuremath{\mbox{\textbf{\textbar}}}}
\newcommand{\POS}[1]{\ensuremath{#1\mbox{\textbf{:}}}}
\newcommand{\MERGE}{\ensuremath{\mbox{\textbf{]}}}}
\newcommand{\FOCUS}[1]{\ensuremath{\mathtt{focus}~#1}}
\newcommand{\UNFOCUS}{\ensuremath{\mathtt{unfocus}}}
\newcommand{\SKIP}{\ensuremath{\mathtt{skip}}}
\newcommand{\TACTIC}[1]{\ensuremath{\mathtt{tactic}~#1}}
\newcommand{\NT}[1]{\ensuremath{\langle\mathit{#1}\rangle}}
\newcommand{\URI}[1]{\texttt{#1}}
\newcommand{\OP}[1]{``\texttt{#1}''}
\newcommand{\FILE}[1]{\texttt{#1}}
\newcommand{\TAC}[1]{\texttt{#1}}
\newcommand{\TODO}[1]{\textbf{TODO: #1}}
\definecolor{gray}{gray}{0.85} % 1 -> white; 0 -> black

\newenvironment{grafite}{\VerbatimEnvironment
 \begin{SaveVerbatim}{boxtmp}}%
{\end{SaveVerbatim}\setlength{\fboxsep}{3mm}%
 \fcolorbox{black}{gray}{\BUseVerbatim[boxwidth=0.9\linewidth]{boxtmp}}}

\newcommand{\PASS}{\stepcounter{pass}\arabic{pass}}

\newcommand{\sequent}[2]{
 \savebox{\tmpxyz}[0.9\linewidth]{
 \begin{minipage}{0.9\linewidth}
 \end{minipage}}\setlength{\fboxsep}{3mm}%
 \fcolorbox{black}{gray}{\usebox{\tmpxyz}}}
\bibliographystyle{klunum}

\title{The \MATITA{} Proof Assistant}

\author{Andrea \surname{Asperti} \email{asperti@cs.unibo.it}}
\author{Claudio \surname{Sacerdoti Coen} \email{sacerdot@cs.unibo.it}}
\author{Enrico \surname{Tassi} \email{tassi@cs.unibo.it}}
\author{Stefano \surname{Zacchiroli} \email{zacchiro@cs.unibo.it}}

\institute{Department of Computer Science, University of Bologna\\
 Mura Anteo Zamboni, 7 --- 40127 Bologna, ITALY}

\runningtitle{The \MATITA{} proof assistant}
\runningauthor{Asperti, Sacerdoti Coen, Tassi, Zacchiroli}

``We are nearly bug-free'' -- \emph{CSC, Oct 2005}

\TODO{write abstract}

\keywords{Proof Assistant, Mathematical Knowledge Management, XML, Authoring,

% toc & co: to be removed in the final paper version
\section{Introduction}

\MATITA{} is the Proof Assistant under development by the \HELM{}
team~\cite{mkm-helm} at the University of Bologna, under the direction of
Prof.~Asperti. This paper describes the overall architecture of
the system, focusing on its most distinctive and innovative features.

\subsection{Historical perspective}

The origins of \MATITA{} go back to 1999. At the time we were mostly
interested in developing tools and techniques to enhance the accessibility
via Web of libraries of formalized mathematics. Due to its size, the
library of the \COQ~\cite{CoqManual} proof assistant (of the order of
was chosen as a privileged test bench for our work, although experiments
have also been conducted with other systems, notably
with \NUPRL~\cite{nuprl-book}.
The work, mostly performed in the framework of the recently concluded
European project \MOWGLIIST{} \MOWGLI~\cite{pechino}, mainly consisted in the

\item exporting the information from the internal representation of
 \COQ{} to a system- and platform-independent format. Since XML was at
 the time an emerging standard, we naturally adopted that technology,
 fostering a content-centric architecture~\cite{content-centric} where
 the documents of the library were the main components around which
 everything else had to be built;

\item developing indexing and searching techniques supporting semantic
 queries to the library;

\item developing languages and tools for a high-quality notational
 rendering of mathematical information.\footnote{We have been active in
 the \MATHML{} Working group since 1999.}
According to our content-centric commitment, the library exported from
\COQ{} was conceived as being distributed and most of the tools were developed
as Web services. The user can interact with the library and the tools by
means of a Web interface that orchestrates the Web services.

Web services and other tools have been implemented as front-ends
to a set of software components, collectively called the \HELM{} components.
At the end of the \MOWGLI{} project we already had at our disposal the
following tools and software components:
\item XML specifications for the Calculus of Inductive Constructions,
 with components for parsing and saving mathematical objects in such a
 format~\cite{exportation-module};

\item metadata specifications with components for indexing and querying the

\item a proof checker (i.e. the \emph{kernel} of a proof assistant),
 implemented to check that we exported from the \COQ{} library all the
 logically relevant content;

\item a sophisticated term parser (used by the search engine), able to deal
 with potentially ambiguous and incomplete information, typical of
 mathematical notation~\cite{disambiguation};

\item a \emph{refiner} component, i.e. a type inference system, based on
 partially specified terms, used by the disambiguating parser;

\item complex transformation algorithms for proof rendering in natural
 language~\cite{remathematization};

\item an innovative, \MATHML-compliant rendering widget~\cite{padovani}
 for the \GTK{} graphical environment,\footnote{\url{http://www.gtk.org/}}
 supporting high-quality bidimensional
 rendering and semantic selection, i.e. the possibility of selecting
 semantically meaningful subexpressions and pasting the corresponding
 content into a different text area.
Starting from all this, developing our own proof assistant was not
such a distant goal: essentially, we ``just'' had to
add an authoring interface and a set of functionalities for the
overall management of the library, integrating everything into a
single system. \MATITA{} is the result of this effort.
\subsection{The system}

\MATITA{} is a proof assistant (also called an interactive theorem prover).
It is based on the Calculus of (Co)Inductive Constructions
(CIC)~\cite{Werner}, a dependently typed lambda-calculus \`a la
Church enriched with primitive inductive and co-inductive data types.
Via the Curry-Howard isomorphism, the calculus can be seen as a very
rich higher order logic in which proofs can be simply represented and
stored as lambda-terms. \COQ{} and \LEGO~\cite{lego} are other systems
that adopt (variations of) CIC as their foundation.
The proof language of \MATITA{} is procedural, in the tradition of the LCF
theorem prover~\cite{lcf}. \COQ, \NUPRL, PVS and Isabelle are all examples of
systems whose proof language is procedural. Traditionally, in a procedural
system the user interacts only with the \emph{script}, while proof terms are
internal records kept by the system. On the contrary, in \MATITA{} proof terms
are regarded as declarative versions of the proof. Playing that role, they are
the primary means of communicating proofs (once rendered in natural language
for human audiences).
The user interfaces now adopted by all the proof assistants based on a
procedural proof language have been inspired by the CtCoq pioneering
system~\cite{ctcoq1}. One successful incarnation of the ideas introduced
by CtCoq is the Proof General generic interface~\cite{proofgeneral},
which has set a de facto
standard way of interacting with such systems. Several procedural proof
assistants have either adopted or cloned Proof General as their main user
interface. The authoring interface of \MATITA{} is a clone of the Proof
General interface. In contrast, the interface for interacting with the
library is rather innovative and directly inspired by the Web interfaces
to our Web servers.
\MATITA{} is backward compatible with the XML library of proof objects exported
from \COQ{}, but, in order to test the actual usability of the system, we are
also developing a new library of basic results from scratch.
\subsection{Relationship with \COQ{}}

At first sight, \MATITA{} looks like (and partly is) a \COQ{} clone. This is
more the effect of the circumstances of its creation described
above than the result of a deliberate design. In particular, we
(essentially) share the same foundational dialect as \COQ{} (the
Calculus of (Co)Inductive Constructions), the same implementation
language (\OCAML\footnote{\url{http://caml.inria.fr/}}),
and the same (procedural, script-based) authoring philosophy.
However, the analogy essentially stops here and no code is shared
between the two systems.
In a sense, we like to think of \MATITA{} as the way \COQ{} would
look if entirely rewritten from scratch: just to give an
idea, although \MATITA{} currently supports almost all functionalities of
\COQ{}, it links 60'000 lines of \OCAML{} code, against the 166'000 lines linked
by \COQ{} (and we are convinced that, starting from scratch again,
we could reduce our code even further in a sensible way).

Moreover, the complexity of the code of \MATITA{} is greatly reduced with
respect to \COQ. For instance, the API of the components of \MATITA{} comprises
989 functions, to be compared with the 4'286 functions of \COQ.
Finally, \MATITA{} has several innovative features over \COQ{} that derive
from the integration of Mathematical Knowledge Management tools with proof
assistants. Among them are the advanced indexing tools over the library and
the parser for ambiguous mathematical notation.
The size and complexity improvements over \COQ{} must be understood
historically. \COQ~\cite{CoqArt} is a rather old
system whose development started 20 years ago. Since then,
several developers have taken over the code, and several new research ideas
that were not considered in the original architecture have been experimented
with and integrated into the system. Moreover, there exist many developments
for \COQ{} that require backward compatibility between each pair of releases;
since many central functionalities of a proof assistant are based on heuristics
or arbitrary choices to overcome undecidability (e.g. for higher order
unification), changing these functionalities while maintaining backward
compatibility is very difficult. Finally, the code of \COQ{} has been greatly
optimized over the years; optimization reduces maintainability and raises the
complexity of the code.
In writing \MATITA{} we have not been hindered by backward compatibility, and
we have taken advantage of the research results and experiences previously
developed by others, including the authors of \COQ. Moreover, starting from
scratch, we designed the architecture in advance and split
the code into coherent, minimally coupled components.
In the future we plan to exploit \MATITA{} as a test bench for new ideas and
extensions. Keeping the individual components and the whole architecture as
simple as possible is thus crucial to foster future experiments and to
allow other developers to quickly understand our code and contribute.

%For direct experience of the authors, the learning curve to understand and
%be able to contribute to \COQ{}'s code is quite steep and requires direct
%and frequent interactions with \COQ{} developers.
\section{Architecture}

\includegraphics[width=0.9\textwidth,height=0.8\textheight]{pics/libraries-clusters}
\caption[\MATITA{} components and related applications]{\MATITA{}
 components and related applications, with thousands of lines of
\label{fig:libraries}
Fig.~\ref{fig:libraries} shows the architecture of the \emph{\components}
(circle nodes) and \emph{applications} (squared nodes) developed in the
\HELM{} project. Each node is annotated with the number of lines of
source code (including comments).

Applications and \components{} depend on other \components{}, forming a
directed acyclic graph (DAG). Each \component{} can be decomposed into
a set of \emph{modules}, also forming a DAG.

Modules and \components{} provide coherent sets of functionalities
at different scales. Applications that require only a few functionalities
depend on a restricted set of \components.
Only the proof assistant \MATITA{} and the \WHELP{} search engine are
applications meant to be used directly by the user. All the other applications
are Web services developed in the \HELM{} and \MOWGLI{} projects and already
described elsewhere. In particular:
\item The \emph{\GETTER}~\cite{zack-master} is a Web service to
 retrieve an (XML) document from a physical location (URL) given its
 logical name (URI). The Getter is responsible for updating a table that
 maps URIs to URLs. Thanks to the Getter it is possible to work on a
 logically monolithic library that is physically distributed over the
 network.

\item \emph{\WHELP}~\cite{whelp} is a search engine to index and
 locate mathematical concepts (axioms, theorems, definitions) in the
 logical library managed by the Getter. Typical examples of
 \WHELP{} queries are those that search for a theorem that generalizes or
 instantiates a given formula, or that can be immediately applied to
 prove a given goal. The output of Whelp is an XML document that lists
 the URIs of a complete set of candidates that are likely to satisfy
 the given query. The set is complete in the sense that no concept that
 actually satisfies the query is thrown away. However, the query is
 only approximated, in the sense that false matches can be returned.

\item \emph{\UWOBO}~\cite{zack-master} is a Web service that, given the
 URI of a mathematical concept in the distributed library, renders it
 according to the two-dimensional mathematical notation provided by the
 user. \UWOBO{} may also inline the rendering of mathematical concepts into
 arbitrary documents before returning them. The Getter is used by
 \UWOBO{} to retrieve the document to be rendered.

\item The \emph{Proof Checker}~\cite{zack-master} is a Web service
 that, given the URI of a concept in the distributed library, checks its
 correctness. Since the concept is likely to depend in an acyclic way
 on other concepts, the proof checker is also responsible for building,
 in a top-down way, the DAG of all dependencies, checking every
 concept for correctness in turn.

\item The \emph{Dependency Analyzer}~\cite{zack-master} is a Web
 service that can produce a textual or graphical representation of the
 dependencies of a concept.
The dependency of a \component{} or application on another \component{} can
be satisfied by linking the \component{} into the same executable.
For those \components{} whose functionalities are also provided by the
aforementioned Web services, it is also possible to link stub code that
forwards requests to a remote Web service. For instance, the
\GETTER{} application is just a wrapper to the \GETTER{} \component{}
that allows it to be used as a Web service. \MATITA{} can directly link
the code of the \GETTER{} \component, or it can use a stub library with
the same API that forwards every request to the Web service.
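The Getter's core job described above, resolving logical URIs to physical URLs through an updatable mapping table, can be sketched as follows. This is an illustrative Python sketch, not Matita's actual OCaml code; the class and method names and the example URI/URL prefixes are all hypothetical.

```python
# Illustrative sketch of the Getter's core service: resolving logical
# names (URIs) to physical locations (URLs) through a mapping table.

class Getter:
    """Maps URI prefixes to URL prefixes (hypothetical API)."""

    def __init__(self):
        self._table = {}  # URI prefix -> URL prefix

    def register(self, uri_prefix, url_prefix):
        self._table[uri_prefix] = url_prefix

    def resolve(self, uri):
        # Longest-prefix match, so more specific mappings win.
        for prefix in sorted(self._table, key=len, reverse=True):
            if uri.startswith(prefix):
                return self._table[prefix] + uri[len(prefix):]
        raise KeyError(uri)

getter = Getter()
getter.register("cic:/matita/", "http://example.org/xml/")
assert (getter.resolve("cic:/matita/nat/plus.con")
        == "http://example.org/xml/nat/plus.con")
```

Because the mapping is the only state involved, the same logic can sit behind a linked library or behind a remote Web service stub with an identical API, which is exactly the choice \MATITA{} is given.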
To better understand the architecture of \MATITA{} and the role of each
\component, we can focus on the representation of mathematical
information. In CIC, terms are used to represent mathematical formulae,
types and proofs. \MATITA{} is able to handle terms at four different
levels of specification. At each level it is possible to provide a
different set of functionalities. The four levels are: fully
specified terms; partially specified terms; content level terms;
presentation level terms.
\subsection{Fully specified terms}
\label{sec:fullyintro}

\emph{Fully specified terms} are CIC terms where no information is
missing or left implicit. A fully specified term should be well-typed.
The mathematical concepts (axioms, definitions, theorems) that are stored
in our mathematical library are fully specified and well-typed terms.
Fully specified terms are extremely verbose (to make type-checking
decidable). Their syntax is fixed and does not resemble the usual
extendible mathematical notation. They are not meant for direct user
consumption.
The \texttt{cic} \component{} defines the data type that represents CIC terms
and provides a parser for terms stored in XML format.

The most important \component{} that deals with fully specified terms is
\texttt{cic\_proof\_checking}. It implements the procedure that verifies
whether a fully specified term is well-typed. It also implements the
\emph{conversion} judgement that verifies whether two given terms are
computationally equivalent (i.e. whether they share the same normal form).
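To make the conversion judgement concrete, here is a toy sketch (in Python, not Matita's OCaml, and over a minimal lambda-calculus rather than full CIC): two terms are computationally equivalent when they reduce to the same normal form. The sketch ignores variable capture and every CIC feature beyond beta reduction; it only illustrates the ``same normal form'' idea.

```python
# Toy conversion check: normalize both terms, compare the normal forms.
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    param: str
    body: object

@dataclass(frozen=True)
class App:
    fn: object
    arg: object

def subst(t, name, value):
    # Capture-avoidance is omitted: this is only an illustration.
    if isinstance(t, Var):
        return value if t.name == name else t
    if isinstance(t, Lam):
        return t if t.param == name else Lam(t.param, subst(t.body, name, value))
    return App(subst(t.fn, name, value), subst(t.arg, name, value))

def normalize(t):
    if isinstance(t, App):
        fn, arg = normalize(t.fn), normalize(t.arg)
        if isinstance(fn, Lam):  # beta reduction
            return normalize(subst(fn.body, fn.param, arg))
        return App(fn, arg)
    if isinstance(t, Lam):
        return Lam(t.param, normalize(t.body))
    return t

def convertible(a, b):
    return normalize(a) == normalize(b)

# (\x. x) y is convertible to y; two distinct variables are not.
assert convertible(App(Lam("x", Var("x")), Var("y")), Var("y"))
assert not convertible(Var("a"), Var("b"))
```

The real kernel also handles delta, iota and zeta reductions and compares terms lazily rather than fully normalizing them, but the judgement it implements has the same shape.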
Terms may reference other mathematical concepts in the library.
One commitment of our project is that the library should be physically
distributed. The \GETTER{} \component{} manages the distribution,
providing a mapping from logical names (URIs) to the physical location
of a concept (a URL). The \texttt{urimanager} \component{} provides the URI
data type and several utility functions over URIs. The
\texttt{cic\_proof\_checking} \component{} calls the \GETTER{}
\component{} every time it needs to retrieve the definition of a mathematical
concept referenced by a term that is being type-checked.

The Proof Checker application is the Web service that provides an interface
to the \texttt{cic\_proof\_checking} \component.
We use metadata and a sort of crawler to index the mathematical concepts
in the distributed library. We are interested in retrieving a concept
by matching, instantiation or generalization of a user- or system-provided
mathematical formula. Thus we need to collect metadata on the fully
specified terms and to store the metadata in some kind of (relational)
database for later use. The \texttt{hmysql} \component{} provides an
interface to a (possibly remote) MySQL\footnote{\url{http://www.mysql.com/}}
database system used to store the metadata.
The \texttt{metadata} \component{} defines the data type of the metadata
we are collecting and the functions that extract the metadata from the
mathematical concepts (the main functionality of the crawler).
The \texttt{whelp} \component{} implements a search engine that performs
approximated queries by matching/instantiation/generalization. The queries
operate only on the metadata and do not involve any actual matching
(see the \texttt{cic\_unification} \component{} in
Sect.~\ref{sec:partiallyintro}). Since no actual matching is performed,
a query only returns a complete and hopefully small set of matching
candidates. The process that issued the query is responsible for
actually retrieving the candidates from the distributed library in order
to prune out false matches, if it is interested in doing so.
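The ``complete but approximated'' behaviour of such metadata queries can be sketched as follows. This is a Python illustration with hypothetical lemma names, not the actual \texttt{whelp} code: concepts are indexed by the constants occurring in their statements, and a query returns every concept whose metadata is compatible, so no true match is lost but false matches may slip through.

```python
# Sketch of an approximated metadata query: index concepts by the set of
# constants mentioned in their statement; a query returns the complete
# (possibly over-approximated) set of candidates. Names are hypothetical.

index = {
    "le_trans":  {"le"},
    "plus_le":   {"le", "plus"},
    "plus_comm": {"plus", "eq"},
}

def approx_query(required_constants):
    # A candidate qualifies if its statement mentions every queried constant.
    # Real term-level matching is left to the caller, which may prune
    # false matches afterwards.
    return {name for name, consts in index.items()
            if required_constants <= consts}

assert approx_query({"le"}) == {"le_trans", "plus_le"}
assert approx_query({"plus"}) == {"plus_le", "plus_comm"}
```

Completeness is what makes the scheme usable: a caller that performs real matching on the candidates is guaranteed never to miss a theorem because of the approximation.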
The \WHELP{} application is the Web service that provides an interface to
the \texttt{whelp} \component.
According to our vision, the library is developed collaboratively, so that
changing or removing a concept can invalidate other concepts in the library.
Moreover, changing or removing a concept requires a corresponding change
in the metadata database. The \texttt{library} \component{} is responsible
for preserving the coherence of the library and the database. For instance,
when a concept is removed, all the concepts that depend on it and their
metadata are removed from the library. This aspect will be detailed further
in Sect.~\ref{sec:libmanagement}.
\subsection{Partially specified terms}
\label{sec:partiallyintro}

\emph{Partially specified terms} are CIC terms where subterms can be omitted.
Omitted subterms can bear no information at all, or they may be associated
with a sequent. The former are called \emph{implicit terms} and they occur
only linearly. The latter may occur multiple times and are called
\emph{metavariables}~\cite{geuvers-jojgov,munoz}.
An \emph{explicit substitution} is applied to each
occurrence of a metavariable. A metavariable stands for a term whose type is
given by the conclusion of the sequent. The term must be closed in the
context that is given by the ordered list of hypotheses of the sequent.
The explicit substitution instantiates every hypothesis with an actual
value for the variable bound by the hypothesis.
Partially specified terms are not required to be well-typed. However, a
partially specified term should be \emph{refinable}. A \emph{refiner} is
a type-inference procedure that can instantiate implicit terms and
metavariables and that can introduce
\emph{implicit coercions}~\cite{barthe95implicit} to make a
partially specified term well-typed. The refiner of \MATITA{} is implemented
in the \texttt{cic\_unification} \component. Just as the type checker is based
on the conversion check, the refiner is based on
\emph{unification}~\cite{strecker}, a procedure that makes two partially
specified terms convertible by instantiating as few as possible of the
metavariables that occur in them.
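A first-order flavour of this unification step can be sketched as follows. This Python sketch is only illustrative: Matita's real procedure is higher-order, works up to conversion, and operates on CIC terms; here metavariables are strings starting with \texttt{?} and compound terms are tuples.

```python
# First-order unification sketch: make two partially specified terms
# equal by instantiating metavariables ("?n"). Returns a substitution
# (dict) on success, None on clash.

def is_meta(t):
    return isinstance(t, str) and t.startswith("?")

def walk(t, subst):
    # Follow already-assigned metavariables to their values.
    while is_meta(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst=None):
    subst = dict(subst or {})
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_meta(a):
        subst[a] = b
        return subst
    if is_meta(b):
        subst[b] = a
        return subst
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None  # rigid-rigid clash

# Unifying (plus ?1 O) with (plus x O) instantiates ?1 := x.
assert unify(("plus", "?1", "O"), ("plus", "x", "O")) == {"?1": "x"}
# (f a) and (f b) clash: no substitution exists.
assert unify(("f", "a"), ("f", "b")) is None
```

The refiner uses exactly this kind of procedure as its workhorse, except that two subterms may also be identified when they are merely convertible rather than syntactically equal.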
Since terms are used in CIC to represent proofs, correct incomplete
proofs are represented by refinable partially specified terms. The
metavariables that occur in the proof correspond to the conjectures still to
be proved. The sequent associated to a metavariable is the conjecture the
user needs to prove.
\emph{Tactics} are the procedures that the user can apply to make progress in
the proof. A tactic proves a conjecture, possibly creating new (and hopefully
simpler) conjectures. The implementation of tactics is given in the
\texttt{tactics} \component. It is heavily based on the refinement and
unification procedures of the \texttt{cic\_unification} \component.

The \texttt{grafite} \component{} defines the abstract syntax tree (AST) for
the commands of the \MATITA{} proof assistant. Most of the commands are
tactics. Other commands are used to give definitions and axioms or to state
theorems and lemmas. The \texttt{grafite\_engine} \component{} is the core of
\MATITA. It implements the semantics of each command in the grafite AST as a
function from status to status. It also implements an undo function to go
back to previous statuses.
Like fully specified terms, partially specified terms are not well suited
for user consumption, since their syntax is not extendible and it is not
possible to adopt the usual mathematical notation. However, they are already
an improvement over fully specified terms, since they allow the omission of
redundant information that can be inferred by the refiner.
\subsection{Content level terms}
\label{sec:contentintro}

The language used to communicate proofs and especially formulae with the
user does not only need to be extendible and to accommodate the usual
mathematical notation. It must also reflect the comfortable degree of
imprecision and ambiguity that the mathematical language provides.
For instance, it is common practice in mathematics to speak of a generic
equality that can be used to compare any two terms. However, it is well known
that several equalities can be distinguished as soon as we care about
decidability or about their computational properties. For instance, equality
over real numbers is well known to be undecidable, whereas it is decidable
over
Similarly, we usually speak of natural numbers and their operations and
properties without caring about their representation. However, the
computational properties of addition over the binary representation are very
different from those of addition over the unary representation. And addition
over two natural numbers is definitely different from addition over two real
numbers.
Formalized mathematics cannot hide these differences and obliges the user to
be very precise about the types he is using and their representation. However,
to communicate formulae with the user and with external tools, it seems good
practice to stick to the usual imprecise mathematical ontology. In the
Mathematical Knowledge Management community this imprecise language is called
the \emph{content level}~\cite{adams} representation of formulae.
In \MATITA{} we provide translations from partially specified terms
to content level terms and the other way around. The first translation can
also be applied to fully specified terms, since a fully specified term is a
special case of a partially specified term where no metavariable or implicit
term occurs.

The translation from partially specified terms to content level terms must
discriminate between terms used to represent proofs and terms used to
represent formulae. The former are translated to a content level
representation of proof steps that can in turn easily be rendered in natural
language using techniques inspired by~\cite{natural,YANNTHESIS}. The
representation adopted has greatly influenced the OMDoc~\cite{omdoc} proof
format, which is now isomorphic to it. Terms that represent formulae are
translated to \MATHML{} Content formulae. \MATHML{} Content~\cite{mathml} is
a W3C standard for the representation of content level formulae in an
extensible XML format.
The translation to content level is implemented in the
\texttt{acic\_content} \component. Its input consists of \emph{annotated
partially specified terms}, that is, maximally unshared
partially specified terms enriched with additional typing information for each
subterm. This information is used to discriminate between terms that represent
proofs and terms that represent formulae. Part of it is also stored at the
content level, since it is required to generate the natural language rendering
of proofs. The terms need to be maximally unshared (i.e. they must be a tree
and not a DAG) because we need to associate different typing information
to different occurrences of a subterm.
This association is made easier when the term is represented as a tree, since
it is then possible to label each node with a unique identifier and associate
the typing information using a map on the identifiers.
The \texttt{cic\_acic} \component{} unshares and annotates terms. It is used
by the \texttt{library} \component, since fully specified terms are stored
in the library in their annotated form.
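The unsharing-plus-labelling step can be sketched as follows. This is an illustrative Python sketch (terms are plain nested tuples, not CIC terms): a DAG-shaped term is copied into a tree, every node receives a unique identifier, and a side map from identifiers to nodes is built, onto which per-occurrence typing information could then be attached.

```python
# Sketch of maximal unsharing: copy a (possibly shared) nested-tuple
# term into a tree whose every node carries a fresh unique identifier.
import itertools

def unshare(term, counter=None, ids=None):
    counter = counter if counter is not None else itertools.count()
    ids = ids if ids is not None else {}
    node_id = next(counter)
    if isinstance(term, tuple):
        # Each child occurrence is copied independently: sharing is lost
        # on purpose, so occurrences can be annotated differently.
        children = [unshare(t, counter, ids)[0] for t in term]
        node = (node_id, tuple(children))
    else:
        node = (node_id, term)
    ids[node_id] = node        # identifier -> node side map
    return node, ids

shared = ("S", "O")
dag = ("plus", shared, shared)   # the same subterm object occurs twice
tree, ids = unshare(dag)

# The two occurrences of ("S", "O") now carry different identifiers:
(_, (_, occ1, occ2)) = tree
assert occ1[0] != occ2[0]
assert len(ids) == 8   # every node of the tree got its own identifier
```

In \MATITA{} the side map carries the typing annotations required at the content level; here it merely records the nodes, which is enough to show why a tree (and not a DAG) is needed.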
We do not yet provide a reverse translation from content level proofs to
partially specified terms, but in \texttt{cic\_disambiguation} we do provide
the reverse translation for formulae. The mapping from
content level formulae to partially specified terms is not unique, due to
the ambiguity of the content level. As a consequence, the translation
is guided by an \emph{interpretation}, that is, a function that chooses for
every ambiguous formula one partially specified term. The
\texttt{cic\_disambiguation} \component{} implements the
disambiguation algorithm presented in~\cite{disambiguation}, which is
responsible for building in an efficient way the set of all correct
interpretations. An interpretation is correct if the partially specified term
obtained using the interpretation is refinable.
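The principle behind disambiguation can be sketched as follows. This Python sketch uses hypothetical constant names and a toy refinability check; the real algorithm of \texttt{cic\_disambiguation} prunes the search space efficiently instead of enumerating every combination, as described in~\cite{disambiguation}.

```python
# Sketch of disambiguation: each ambiguous symbol has several candidate
# interpretations; an interpretation of the whole formula is correct
# when the resulting term is refinable (here: a toy type-compatibility
# check standing in for the real refiner).
from itertools import product

candidates = {
    "+": ["plus_nat", "plus_real"],   # hypothetical constant names
    "0": ["O_nat", "zero_real"],
}

def refinable(interp):
    # Toy refiner: all chosen constants must live in the same "type",
    # encoded here as the suffix of the hypothetical constant name.
    types = {c.split("_")[1] for c in interp.values()}
    return len(types) == 1

def correct_interpretations(symbols):
    alternatives = [[(s, c) for c in candidates[s]] for s in symbols]
    return [dict(choice) for choice in product(*alternatives)
            if refinable(dict(choice))]

# "x + 0" admits two correct interpretations: all-nat and all-real.
result = correct_interpretations(["+", "0"])
assert len(result) == 2
```

The key property illustrated is that correctness of an interpretation is not a local check on each symbol: it is defined globally, by refinability of the term the interpretation produces.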
In Sect.~\ref{sec:partiallyintro} we described the semantics of each command
as a function from status to status. We also hinted that the formulae in a
command are encoded as partially specified terms. However, consider the
command ``\texttt{replace} $x$ \texttt{with} $y^2$''. Until the occurrence
of $x$ to be replaced is located, its context is unknown. Since $y^2$ must
replace $x$ in that context, its encoding as a term cannot be computed
until $x$ is located. In other words, $y^2$ must be disambiguated in the
context of the occurrence of $x$ it must replace.
The elegant solution we have implemented consists in representing terms
in a command as functions from a context to a partially refined term. The
function is obtained by partially applying our disambiguation function to
the content level term to be disambiguated. Our solution should be compared
with the one adopted in the \COQ{} system, where ambiguity is only relative to

In \COQ, variables can be bound either by name or by position. A term
occurring in a command has all its variables bound by name, to avoid the need
for a context during disambiguation. This makes every
operation over terms more complex (i.e., according to our architecture, every
module that depends on \texttt{cic}), since the code must deal consistently
with both kinds of binding. Moreover, this solution cannot cope with other
forms of ambiguity (such as the context-dependent meaning of the exponent in
the previous example).
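The ``term as a function from context to term'' idea can be sketched as follows. This Python sketch is illustrative only (the toy \texttt{disambiguate} stands in for Matita's real disambiguation function): partially applying disambiguation to the content level term yields a closure that is invoked only once the context of the occurrence is known.

```python
# Sketch of delayed disambiguation: a command argument travels as a
# function from a context (list of hypothesis names) to a term, so its
# elaboration is postponed until the target occurrence is located.

def disambiguate(content_term, context):
    # Toy stand-in for the real disambiguation function: a name bound in
    # the context becomes a de Bruijn-style reference, otherwise it is
    # treated as a (hypothetical) library constant.
    if content_term in context:
        return ("Rel", context.index(content_term))
    return ("Const", content_term)

def delayed(content_term):
    # Partial application: the remaining argument is the context.
    return lambda context: disambiguate(content_term, context)

arg = delayed("y")
# The same command argument elaborates differently in different contexts:
assert arg(["y", "x"]) == ("Rel", 0)
assert arg(["x"]) == ("Const", "y")
```

This is exactly what ``\texttt{replace} $x$ \texttt{with} $y^2$'' needs: the closure for $y^2$ is applied to the context of whichever occurrence of $x$ is eventually located.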
\subsection{Presentation level terms}
\label{sec:presentationintro}

Content level terms are a sort of abstract syntax tree for mathematical
formulae and proofs. The concrete syntax given to these abstract trees
is called the \emph{presentation level}.

The most important difference between the content level language and the
presentation level language is that only the former is extendible. Indeed,
the presentation level language is a finite language that comprises all
the usual mathematical symbols. Mathematicians invent new notions every
single day, but they stick to a set of symbols that is more or less fixed.
The fact that the presentation language is finite allows the definition of
standard languages. In particular, for formulae we have adopted \MATHML{}
Presentation~\cite{mathml}, an XML dialect standardized by the W3C. To
represent proofs it is enough to embed formulae in plain text enriched with
formatting boxes. Since the language of formatting boxes is very simple,
many equivalent specifications exist and we have adopted our own, called
\BOXML.
The \texttt{content\_pres} \component{} contains the implementation of the
translation from content level terms to presentation level terms. The
rendering of presentation level terms is left to the application that uses
the \component. However, in the \texttt{hgdome} \component{} we provide a few
utility functions to build a \GDOME~\cite{gdome2} \MATHML+\BOXML{} tree from
our presentation
level terms. \GDOME{} \MATHML+\BOXML{} trees can be rendered by the
\GTKMATHVIEW{}
widget developed by Luca Padovani~\cite{padovani}. The widget is
particularly interesting since it allows the implementation of \emph{semantic
selection}.
645 Semantic selection is a technique that consists in enriching the presentation
646 level terms with pointers to the content level terms and to the partially
647 specified terms they correspond to. Highlight of formulae in the widget is
648 constrained to selection of meaningful expressions, i.e. expressions that
649 correspond to a lower level term, that is a content term or a partially or
650 fully specified term.
651 Once the rendering of a lower level term is
652 selected it is possible for the application to retrieve the pointer to the
653 lower level term. An example of applications of semantic selection is
654 \emph{semantic copy \& paste}: the user can select an expression and paste it
655 elsewhere preserving its semantics (i.e. the partially specified term),
656 possibly performing some semantic transformation over it (e.g. renaming
657 variables that would be captured or lambda-lifting free variables).
The reverse translation from presentation level terms to content level terms
is implemented by a parser that is also found in \texttt{content\_pres}.
Unlike the translation from content level terms to partially
refined terms, this translation is not ambiguous. The reason is that the
parsing tool we have adopted (CamlP4) is not able to parse ambiguous
grammars. Thus we require the mapping from presentation level terms
(concrete syntax) to content level terms (abstract syntax) to be unique.
This means that the user must fix once and for all the associativity and
precedence level of every operator he is using. In practice this limitation
does not seem too strong. The reason is that the target of the
translation is an ambiguous language and the user is free to associate
with every content level term several different interpretations (as a
partially specified term).
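Fixing a precedence and an associativity for every operator is exactly what makes the concrete-to-abstract mapping unique. The following sketch, a precedence-climbing parser in Python over an invented operator table (it is not Matita's actual grammar, which is handled by CamlP4), shows that under such a table every token stream has exactly one parse tree:

```python
# Hypothetical operator table: (precedence, associativity).
PREC = {"+": (1, "left"), "*": (2, "left"), "^": (3, "right")}

def parse(tokens):
    """Precedence climbing: returns the unique AST for a flat token list."""
    toks = list(tokens)
    def expr(min_prec):
        lhs = toks.pop(0)                 # atoms are single tokens here
        while toks and toks[0] in PREC:
            prec, assoc = PREC[toks[0]]
            if prec < min_prec:
                break
            op = toks.pop(0)
            # a left-associative operator must not re-consume itself on the right
            rhs = expr(prec + 1 if assoc == "left" else prec)
            lhs = (op, lhs, rhs)
        return lhs
    return expr(1)

print(parse("a + b * c".split()))   # ('+', 'a', ('*', 'b', 'c'))
print(parse("a ^ b ^ c".split()))   # ('^', 'a', ('^', 'b', 'c'))
```

With the table fixed, `a + b * c` can only parse as `a + (b * c)` and the right-associative `^` nests to the right; the ambiguity is pushed entirely to the later, semantic phase.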
Both the direct and the reverse translation between presentation and content
level terms are parameterized over the user-provided mathematical notation.
The \texttt{lexicon} \component{} is responsible for managing the lexicon,
that is the set of active notations. It defines an abstract syntax tree
of commands to declare and activate new notations and it implements the
semantics of these commands. It also implements undoing of the semantic
actions. Among the commands there are hints to the
disambiguation algorithm that are used to control and speed up disambiguation.
These mechanisms will be further discussed in Sect.~\ref{sec:disambiguation}.

Finally, the \texttt{grafite\_parser} \component{} implements a parser for
the concrete syntax of the commands of \MATITA. The parser processes a stream
of characters and returns a stream of abstract syntax trees (the ones
defined by the \texttt{grafite} \component{} and whose semantics is given
by \texttt{grafite\_engine}). When the parser meets a command that changes
the lexicon, it invokes the \texttt{lexicon} \component{} to immediately
process the command. When the parser needs to parse a term at the presentation
level, it invokes the term parser described above, which is contained in
\texttt{content\_pres}.

The \MATITA{} proof assistant and the \WHELP{} search engine are both linked
against the \texttt{grafite\_parser} \component{}
since they provide an interface to the user. In both cases the formulae
written by the user are parsed using the \texttt{content\_pres} \component{} and
then disambiguated using the \texttt{cic\_disambiguation} \component. However,
only \MATITA{} is linked against the \texttt{grafite\_engine} and
\texttt{tactics} \components{} (summing up to a total of 11'200 lines of code)
since \WHELP{} can only execute those ASTs that correspond to queries
(implemented in the \texttt{whelp} \component).
The \UWOBO{} Web service wraps the \texttt{content\_pres} \component,
providing a rendering service for the documents in the distributed library.
To render a document given its URI, \UWOBO{} retrieves it using the
\GETTER, obtaining a document with fully specified terms. Then it translates
it to the presentation level, passing through the content level. Finally
it returns the resulting document to be rendered by the user's
browser.

The \components{} not yet described (\texttt{extlib}, \texttt{xml},
\texttt{logger}, \texttt{registry} and \texttt{utf8\_macros}) are
minor \components{} that provide a core of useful functions and basic
services missing from the standard library of the programming language.
%In particular, the \texttt{xml} \component{} is used to easily represent,
%parse and pretty-print XML files.
\section{The interface to the library}
\label{sec:library}

A proof assistant provides both an interface to interact with its library and
an \emph{authoring} interface to develop new proofs and theories. According
to its historical origins, \MATITA{} strives to provide innovative
functionalities for the interaction with the library. It is more traditional
in its script-based authoring interface. In the remaining part of the paper we
focus on the user view of \MATITA.

The library of \MATITA{} comprises mathematical concepts (theorems,
axioms, definitions) and notation. The concepts are authored sequentially
using scripts that are (ordered) sequences of procedural commands.
Once they are produced we store them independently in the library.
The only relations implicitly kept between the concepts are the logical,
acyclic dependencies among them. This way the library forms a global (and
distributed) hypertext.
\includegraphics[width=0.45\textwidth]{pics/cicbrowser-screenshot-browsing}
\hspace{0.05\textwidth}
\includegraphics[width=0.45\textwidth]{pics/cicbrowser-screenshot-query}
\caption{Browsing and searching the library\strut}
\label{fig:cicbrowser1}

\includegraphics[width=0.70\textwidth]{pics/cicbrowser-screenshot-con}
\caption[Natural language rendering]{Natural language rendering of a theorem
from the library\strut}
\label{fig:cicbrowser2}
Several useful operations can be implemented on the library alone,
regardless of the scripts. For instance, searching and browsing are
implemented by the ``cicBrowser'' window available from the \MATITA{}
GUI. Using it, the hierarchical structure of the library can be
explored (on the left of Fig.~\ref{fig:cicbrowser1}), the natural
language rendering of proofs can be inspected
(Fig.~\ref{fig:cicbrowser2}), and content based searches on the
library can be performed (on the right of Fig.~\ref{fig:cicbrowser1}).
Content based searches are described in
Sect.~\ref{sec:indexing}. Other examples of library operations are
disambiguation of content level terms (see
Sect.~\ref{sec:disambiguation}) and automatic proof searching (see
Sect.~\ref{sec:automation}).

The key requisite for the previous operations is that the library must
be fully accessible and in a logically consistent state. To preserve
consistency, a concept cannot be altered or removed unless the part of the
library that depends on it is modified accordingly. To allow incremental
changes and cooperative development, consistent revisions are necessary.
For instance, to modify a definition, the user could fork a new version
of the library where the definition is updated and all the concepts that
used to rely on it are absent. The user is then responsible for restoring
the removed part in the new branch, merging the branch when the library is
fixed.
To implement the proposed versioning system on top of a standard one
it is necessary to implement \emph{invalidation} first. Invalidation
is the operation that locates and removes from the library all the concepts
that depend on a given one. As described in Sect.~\ref{sec:libmanagement},
removing a concept from the library also involves deleting its metadata from
the database.

For non-collaborative development, full versioning can be avoided, but
invalidation is still required. Since nobody else is relying on the
user's development, the user is free to change and invalidate part of the
library without branching. Invalidation is still necessary to avoid using a
concept that is no longer valid.
So far, in \MATITA{} we address only this non-collaborative scenario
(see Sect.~\ref{sec:libmanagement}). Collaborative development and versioning
are still under design.

Scripts are not seen as constituents of the library. They are not published
and indexed, so they cannot be searched or browsed using \HELM{} tools.
However, they play a central role in the maintenance of the library.
Indeed, once a concept is invalidated, the only way to restore it is to
fix the possibly broken script that used to generate it.
Moreover, during the authoring phase, scripts are a natural way to
group concepts together. They also constitute a less fine-grained clustering
of concepts for invalidation.

In the rest of this section we present in more detail the functionalities of
\MATITA{} related to library management and exploitation.
Sect.~\ref{sec:authoring} is devoted to the description of the peculiarities of
the \MATITA{} authoring interface.
\subsection{Indexing and searching}
\label{sec:indexing}

The \MATITA{} system is first of all an interface between the user and
the mathematical library. For this reason, it is important to be
able to search and retrieve mathematical concepts in a quick and
effective way, assuming as little knowledge as possible about the
library. To this aim, \MATITA{} uses a sophisticated indexing mechanism
for mathematical concepts, based on a rich metadata set that has been
tuned along the European project \MOWGLIIST{} \MOWGLI. The metadata
set, and the searching facilities built on top of it --- collected
in the so-called \WHELP{} search engine --- have been
extensively described in~\cite{whelp}. Let us just recall here that
the \WHELP{} metadata model is essentially based on a single ternary relation
\REF{p}{s}{t} stating that a concept $s$ refers to a concept $t$ at a
given position $p$, where the position specifies the place of the
occurrence of $t$ inside $s$ (we currently work with a fixed set of
positions, discriminating the hypotheses from the conclusion and
outermost from innermost occurrences). This approach is extremely
flexible, since by extending the set of positions
we may improve the granularity and the precision of our indexing technique,
with no additional architectural impact.
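The relation \REF{p}{s}{t} can be pictured simply as a set of triples over a fixed vocabulary of positions. The following Python sketch uses an invented two-position vocabulary and invented concept names (the real Whelp position set is finer-grained and the data lives in a relational database):

```python
# Sketch of the ternary metadata relation Ref_p(s, t): concept s refers to
# concept t at position p. Position names are illustrative only.
POSITIONS = {"hypothesis", "conclusion"}

refs = set()   # the relation, as a set of (p, s, t) triples

def index(s, hyp_constants, concl_constants):
    """Index concept s: record every constant it refers to, per position."""
    for t in hyp_constants:
        refs.add(("hypothesis", s, t))
    for t in concl_constants:
        refs.add(("conclusion", s, t))

def concluding_in(t):
    """All concepts whose conclusion mentions t (the core of a Hint-like query)."""
    return {s for (p, s, u) in refs if p == "conclusion" and u == t}

index("le_n", hyp_constants=[], concl_constants=["le"])
index("le_S", hyp_constants=["le"], concl_constants=["le", "S"])
index("plus_n_O", hyp_constants=[], concl_constants=["eq", "plus", "O"])

print(sorted(concluding_in("le")))   # ['le_S', 'le_n']
```

Refining the position vocabulary (e.g. splitting outermost from innermost occurrences) only changes the first component of the triples, which is why the approach has no architectural impact.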
Every time a new mathematical concept is created and saved by the user it gets
indexed, and becomes immediately visible in the library. Several
interesting and innovative features of \MATITA{} described in the following
sections rely in a direct or indirect way on its metadata system and
the search features. Here, we shall just recall some of its most
relevant applications.

A first, very simple but not negligible feature is the \emph{duplicate check}.
As soon as a theorem is stated, just before starting its proof,
the library is searched
to check that no other equivalent statement has already been proved
(based on the pattern matching functionality of \WHELP); if this is the case,
a warning is raised to the user. At present, the notion of equivalence
adopted by \MATITA{} is convertibility, but we may imagine weakening it
in the future, covering for instance isomorphisms.

Another useful \WHELP{} operation is \HINT; we may invoke this query
at any moment during the authoring of a proof, resulting in the list
of all theorems of the library which can be applied to the current
goal. In practice, this is mostly used not to discover which theorems
can be applied to a given goal, but rather to retrieve a theorem that
we wish to apply but whose name we have forgotten.
In fact, even if \MATITA{} adopts a semi-rigid naming convention for
statements (see Sect.~\ref{sec:naming}) that greatly simplifies the effort
of recalling names, the naming discipline remains one of the most
annoying aspects of formal developments, and \HINT{} provides
a very friendly solution.

In the near future, we expect to extend the \HINT{} query to
a \REWRITEHINT, resulting in all equational statements that
can be applied to rewrite the current goal.
\subsection{Disambiguation}
\label{sec:disambiguation}

Software applications that involve input of mathematical content should strive
to require of the user as little drift from informal mathematics as possible.
We believe this to be a fundamental aspect of the user interfaces of such
applications. Since that drift is in general very large when inputting
proofs~\cite{debrujinfactor}, in \MATITA{} we have achieved good results for
mathematical formulae, which can be input using a \TeX-like encoding (the
concrete syntax corresponding to presentation level terms) and are then
translated (in multiple steps) to partially specified terms, as sketched in
Sect.~\ref{sec:contentintro}.
The key ingredient of the translation is the generic disambiguation algorithm
implemented in the \texttt{disambiguation} \component{} of
Fig.~\ref{fig:libraries} and presented in~\cite{disambiguation}. In this
section we detail how to use
that algorithm in the context of the development of a library of formalized
mathematics. We will see that using multiple passes of the algorithm, varying
some of its parameters, helps in keeping the input terse without sacrificing
expressiveness.

\subsubsection{Disambiguation aliases}
\label{sec:disambaliases}

Consider the following command that states a theorem over integer numbers:

\forall x, y, z. x < y \to y < z \to x < z.

The symbol \OP{<} is likely to be overloaded in the library
(at least over natural numbers).
Thus, according to the disambiguation algorithm, two different
refinable partially specified terms could be associated to it.
\MATITA{} asks the user which interpretation he meant. However, to avoid
posing the same question in case of a future re-execution (e.g. undo/redo),
the choice must be recorded. Since scripts need to be re-executed after
invalidation, the choice record must be permanently stored somewhere. The most
natural place is the script itself.
In \MATITA{} disambiguation is governed by \emph{disambiguation aliases}.
They are mappings, stored in the library, from ambiguity sources
(identifiers, symbols and literal numbers at the content level) to partially
specified terms. In case of overloaded sources there exist multiple aliases
with the same source. It is possible to record \emph{disambiguation
preferences} to select one of the aliases of an overloaded source.

Preferences can be explicitly given in the script (using the
misleading \texttt{alias} command), but
are also implicitly added when a new concept is introduced (\emph{implicit
preferences}) or after a successful disambiguation that did not require
user interaction. Explicit preferences are added automatically by \MATITA{} to
record the disambiguation choices of the user. For instance, after the
disambiguation of the command above, the script is altered as follows:

alias symbol "lt" = "integer 'less than'".
\forall x, y, z. x < y \to y < z \to x < z.

The \texttt{alias} command in the example sets the preferred alias for the
\OP{<} symbol.
Implicit preferences for new concepts are set because a concept just defined is
likely to be the preferred one in the rest of the script. Implicit preferences
learned from the disambiguation of previous commands ensure the coherence of
disambiguation in the rest of the script and speed up disambiguation by
reducing the search space.

Disambiguation preferences are included in the lexicon status
(see Sect.~\ref{sec:presentationintro}) that is part of the authoring interface
status. Unlike aliases, they are not part of the library.

When starting a new authoring session the set of disambiguation preferences
is empty. Until it contains a preference for each overloaded symbol to be
used in the script, the user can be faced with questions from the
disambiguator. To reduce the likelihood of user interaction, we introduced
the \texttt{include} command. With \texttt{include} it is possible to import
at once into the current session the set of preferences that was in effect
at the end of the execution of a given script.
Preferences can change. For instance, at the start of the development
of integer numbers the preference for the symbol \OP{<} is likely
to be the one over natural numbers; sooner or later it will be set to the one
over integer numbers.

Nothing forbids the set of preferences from becoming incoherent. For this
reason the disambiguator cannot always respect the user preferences.
Consider, for example:

\forall x, y, k. x < y \to x < y + k.

No refinable partially specified term corresponds to the preferences:
\OP{+} over natural numbers, \OP{<} over integer numbers. To overcome this
limitation we organized disambiguation in \emph{multiple passes}: when the
disambiguator fails, disambiguation is tried again with a less strict set of
preferences.
Several disambiguation parameters can vary among passes. With respect to
preference handling we implemented three passes. In the first pass, called
\emph{mono-preferences}, we consider only the aliases corresponding to the
current set of preferences. In the second pass, called
\emph{multi-preferences}, we
consider every alias corresponding to a current or past preference. For
instance, in the example above disambiguation succeeds in the multi-preferences
pass. In the third pass, called \emph{library-preferences}, all aliases
available in the library are considered.

The rationale behind this choice is to respect user preferences in the early
passes, which complete quickly in case of failure; later passes are slower but
have more chances of success.
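The three-pass cascade can be sketched as follows. The alias tables, the pass driver and the toy ``refinability'' check below are invented stand-ins for Matita's actual machinery (which performs real type checking), but they reproduce the behaviour on the example above: \OP{<} currently preferred over integers, \OP{+} known only over naturals:

```python
from itertools import product

# Hypothetical alias data: names encode the theory before the underscore.
current_prefs = {"<": "Z_lt", "+": "nat_plus"}                 # active preferences
past_prefs    = {"<": {"Z_lt", "nat_lt"}, "+": {"nat_plus"}}   # current + past
library       = {"<": {"Z_lt", "nat_lt", "R_lt"}, "+": {"nat_plus", "Z_plus"}}

def refinable(interp):
    # toy refiner: an interpretation "typechecks" iff all aliases belong
    # to the same theory
    return len({alias.split("_")[0] for alias in interp.values()}) == 1

def disambiguate(symbols):
    passes = [
        ("mono-preferences",    {s: {current_prefs[s]} for s in symbols}),
        ("multi-preferences",   {s: past_prefs[s]      for s in symbols}),
        ("library-preferences", {s: library[s]         for s in symbols}),
    ]
    for name, aliases in passes:
        for choice in product(*(sorted(aliases[s]) for s in symbols)):
            interp = dict(zip(symbols, choice))
            if refinable(interp):
                return name, interp
    return None

# "x < y + k": the mono pass fails (Z_lt vs nat_plus), the multi pass
# succeeds by falling back to the past nat preference for <.
print(disambiguate(["<", "+"]))
# ('multi-preferences', {'<': 'nat_lt', '+': 'nat_plus'})
```

Note how the cheap passes are tried first and the expensive library-wide pass is reached only when every preference-based interpretation fails to refine.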
\subsubsection{Operator instances}
\label{sec:disambinstances}

Consider now the following theorem:

theorem lt_to_Zlt_pos_pos:
\forall n, m: nat. n < m \to pos n < pos m.

and assume that there exist in the library aliases for \OP{<} over natural
numbers and over integer numbers. None of the passes described above is able to
disambiguate \texttt{lt\_to\_Zlt\_pos\_pos}, no matter how preferences are set.
This is because the \OP{<} operator occurs twice in the content level term (it
has two \emph{instances}) and two different interpretations for it have to be
used in order to obtain a refinable partially specified term.
To address this issue, we can consider each instance of a single
symbol as a different ambiguous expression in the content level term,
enabling the use of a different alias for each of them.
Whether or not to exploit this feature is
one of the disambiguation pass parameters. A disambiguation pass which exploits
it is said to be using \emph{fresh instances} (as opposed to a \emph{shared
instances} pass).

Fresh instances lead to a non-negligible performance loss (since the choice of
an alias for one instance does not constrain the choice for the others). For
this reason we always attempt a fresh instances pass only after attempting a
shared instances pass.

\paragraph{One-shot preferences} Disambiguation preferences as seen so far are
instance-independent. However, implicit preferences obtained as a result of a
disambiguation pass which uses fresh instances ought to be instance-dependent.
Informally, the set of preferences that can be respected by the disambiguator
on the theorem above is: ``the first instance of the \OP{<} symbol is over
natural numbers, while the second is over integer numbers''.

Instance-dependent preferences are meaningful only for the term whose
disambiguation generated them. For this reason we call them \emph{one-shot
preferences} and \MATITA{} does not use them to disambiguate further terms in
the script.
\subsubsection{Implicit coercions}
\label{sec:disambcoercions}

Consider the following theorem about derivation:

theorem power_deriv:
\forall n: nat, x: R. d x ^ n dx = n * x ^ (n - 1).

and assume that in the library there is an alias mapping \OP{\^} to a partially
specified term having type: \texttt{R \TEXMACRO{to} nat \TEXMACRO{to} R}. In
order to disambiguate \texttt{power\_deriv}, the occurrence of \texttt{n} on
the right hand side of the equality needs to be ``injected'' from \texttt{nat}
to \texttt{R}. The refiner of \MATITA{} supports
\emph{implicit coercions}~\cite{barthe95implicit} for
this reason: given as input the above presentation level term, it will return a
partially specified term where in place of \texttt{n} the application of a
coercion from \texttt{nat} to \texttt{R} appears (assuming such a coercion has
been defined in advance).

Implicit coercions are not always desirable. For example, in disambiguating
\texttt{\TEXMACRO{forall} x: nat. x < x + 1} we do not want the term which uses
two coercions from \texttt{nat} to \texttt{R} around the \OP{<} arguments to
show up among the possible choices of partially specified terms. For this
reason we always attempt a disambiguation pass which requires the refiner not
to use coercions before attempting a coercion-enabled pass.
The choice of whether implicit coercions are enabled or not interacts with the
choice about operator instances. Indeed, consider again
\texttt{lt\_to\_Zlt\_pos\_pos}, which can be disambiguated using fresh operator
instances. In case there exists a coercion from natural numbers to (positive)
integers (which indeed does), the
theorem can be disambiguated using that coercion twice on the left hand side of
the implication. The obtained partially specified term, however, would probably
not be the expected one, being a theorem which proves a trivial
fact.

Motivated by this and similar examples, we chose to always prefer fresh
instances over implicit coercions, i.e. we always attempt disambiguation
passes with fresh instances
and no implicit coercions before attempting passes with implicit coercions.

\subsubsection{Disambiguation passes}
\label{sec:disambpasses}

According to the criteria described above, in \MATITA{} we perform the
disambiguation passes depicted in Tab.~\ref{tab:disambpasses}. In
our experience this choice gives reasonable performance and minimizes the need
for user interaction during disambiguation.
\caption{Disambiguation passes sequence\strut}
\label{tab:disambpasses}

\begin{tabular}{c|c|c|c}
\multicolumn{1}{p{1.5cm}|}{\centering\raisebox{-1.5ex}{\textbf{Pass}}}
& \multicolumn{1}{p{3.1cm}|}{\centering\textbf{Preferences}}
& \multicolumn{1}{p{2.5cm}|}{\centering\textbf{Operator instances}}
& \multicolumn{1}{p{2.5cm}}{\centering\textbf{Implicit coercions}} \\
\hline
\PASS & Mono-preferences & Shared instances & Disabled \\
\PASS & Multi-preferences & Shared instances & Disabled \\
\PASS & Mono-preferences & Fresh instances & Disabled \\
\PASS & Multi-preferences & Fresh instances & Disabled \\
\PASS & Mono-preferences & Fresh instances & Enabled \\
\PASS & Multi-preferences & Fresh instances & Enabled \\
\PASS & Library-preferences & Fresh instances & Enabled
\subsection{Generation and invalidation}
\label{sec:libmanagement}

%The aim of this section is to describe the way \MATITA{}
%preserves the consistency and the availability of the library
%using the \WHELP{} technology, in response to the user alteration or
%removal of mathematical objects.

%As already sketched in Sect.~\ref{sec:fullyintro} what we generate
%from a script is split among two storage media, a
%classical filesystem and a relational database. The former is used to
%store the XML encoding of the objects defined in the script, the
%disambiguation aliases and the interpretation and notational convention defined,
%while the latter is used to store all the metadata needed by

%While the consistency of the data store in the two media has
%nothing to do with the nature of
%the content of the library and is thus uninteresting (but really
%tedious to implement and keep bug-free), there is a deeper
%notion of mathematical consistency we need to provide. Each object
%must reference only defined object (i.e. each proof must use only
%already proved theorems).

In this section we focus on how \MATITA{} ensures the consistency of the
library during the formalization of a mathematical theory,
giving the user the freedom of adding, removing and modifying objects
without losing the feeling of an always visible and browsable
library.
\subsubsection{Invalidation}

Invalidation (see Sect.~\ref{sec:library}) is implemented in two phases.

The first one is the calculation of all the concepts that recursively
depend on the ones we are invalidating. It can be performed
using the relational database that stores the metadata.
This technique is the same used by the \emph{Dependency Analyzer}
and is described in~\cite{zack-master}.

The second phase is the removal of all the results of the generation,
metadata included.
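The first phase amounts to a reverse transitive closure over the dependency edges recorded in the metadata. In \MATITA{} this runs as queries over the relational database; the following is only an in-memory Python sketch with invented dependency edges:

```python
# Sketch of phase one of invalidation: from the dependency metadata,
# compute every concept that transitively depends on the invalidated one.
deps = {                      # concept -> concepts it refers to (invented)
    "le_S":     {"le"},
    "le_trans": {"le", "le_S"},
    "lt":       {"le"},
    "lt_trans": {"lt", "le_trans"},
    "plus_n_O": {"plus"},
}

def invalidated_by(concept):
    """All concepts to remove when `concept` is invalidated."""
    rdeps = {}                             # reverse the dependency edges
    for s, ts in deps.items():
        for t in ts:
            rdeps.setdefault(t, set()).add(s)
    out, todo = set(), [concept]
    while todo:                            # breadth-unordered closure
        c = todo.pop()
        for s in rdeps.get(c, ()):
            if s not in out:
                out.add(s)
                todo.append(s)
    return out

print(sorted(invalidated_by("le")))   # every concept except plus_n_O
```

Because the dependency graph is acyclic, the closure always terminates, and removing exactly this set (concepts plus their metadata) leaves the remaining library logically consistent.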
\subsubsection{Regeneration}

%The typechecker component guarantees that if an object is well typed
%it depends only on well typed objects available in the library,
%that is exactly what we need to be sure that the logic consistency of
%the library is preserved.

To regenerate an invalidated part of the library \MATITA{} re-executes
the scripts that produced the invalidated concepts. The main
problem is to find a suitable order of execution of the scripts.

For this purpose we provide a tool called \MATITADEP{}
that takes as input the list of scripts that compose the development and
outputs their dependencies in a format suitable for the GNU \texttt{make}
tool.\footnote{\url{http://www.gnu.org/software/make/}}
The user is not asked to run \MATITADEP{} by hand, but
simply to tell \MATITA{} the root directory of his development (where all
script files can be found) and \MATITA{} will handle all the generation
related tasks, including dependency calculation.

To compute dependencies it is enough to look at the script files for
\texttt{include} commands and for literal explicit disambiguation preferences
(see Sect.~\ref{sec:disambaliases}).
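A \MATITADEP-like pass can be sketched as a scan for \texttt{include} commands followed by the emission of \texttt{make} rules. The script contents and the \texttt{.mo} object-file extension below are invented for illustration:

```python
import re

# Invented scripts: only the `include "..."` commands matter here.
scripts = {
    "nat.ma": 'theorem plus_n_O: ...',
    "le.ma":  'include "nat.ma". theorem le_n: ...',
    "lt.ma":  'include "le.ma". include "nat.ma". theorem lt_trans: ...',
}

def make_rules(scripts):
    """Emit one GNU make rule per script, listing its prerequisites."""
    rules = []
    for name, text in sorted(scripts.items()):
        incs = re.findall(r'include\s+"([^"]+)"', text)
        target = name.replace(".ma", ".mo")    # hypothetical object extension
        prereqs = " ".join([name] + [i.replace(".ma", ".mo") for i in incs])
        rules.append(f"{target}: {prereqs}")
    return "\n".join(rules)

print(make_rules(scripts))
# e.g. the rule for le.ma reads:  le.mo: le.ma nat.mo
```

Handing the rules to \texttt{make} then yields both a correct execution order and incremental regeneration: only scripts whose prerequisites changed are re-executed.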
The re-execution of a script to regenerate part of the library
requires the preliminary invalidation of the concepts generated by the
script itself.
\subsubsection{Batch vs Interactive}

\MATITA{} includes an interactive authoring interface and a batch
``compiler'' (\MATITAC).

Only the former is intended to be used directly by the
user; the latter is automatically invoked by \MATITA{}
to regenerate parts of the library previously invalidated.

While they share the same engine for generation and invalidation, they
provide different granularity. \MATITAC{} is only able to re-execute a
whole script and, similarly, to invalidate all the concepts generated
by a script (together with all the other scripts that rely on a concept defined
in it).
\subsection{Automation}
\label{sec:automation}

In the long run, one would expect to work with a proof assistant
like \MATITA{} using only three basic tactics: \TAC{intro}, \TAC{elim}
and \TAC{auto}
(possibly integrated by a moderate use of \TAC{cut}). The state of the art
in automated deduction is still far from this goal, but
this is one of the main development directions of \MATITA.
Even in this field, the underlying philosophy of \MATITA{} is to
free the user from any burden related to the overall management
of the library. For instance, in \COQ{} the user is responsible for
defining small collections of theorems to be used as a parameter
of the \TAC{auto} tactic;
in \MATITA{} it is the system itself that automatically retrieves, from
the whole library, a subset of theorems worth considering
according to the signature of the current goal and context.
The basic tactic merely iterates the use of the \TAC{apply} tactic
(with no \TAC{intro}). The search tree may be pruned according to two
main parameters: the \emph{depth} (with the obvious meaning), and the
\emph{width}, that is the maximum number of (new) open goals allowed at
any instant. \MATITA{} has only one notion of metavariable, corresponding
to the so-called existential variables of \COQ; so \MATITA's \TAC{auto}
tactic should be compared with \COQ's \TAC{eauto} tactic.
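The effect of the two pruning parameters can be sketched with a toy backward search. The lemma table is invented and plain syntactic matching stands in for the unification that a real \TAC{apply} performs:

```python
# Sketch of a bounded auto-like search: repeatedly "apply" lemmas backwards,
# pruning by depth and width. Syntactic equality replaces unification.
lemmas = {                       # name -> (premises, conclusion), invented
    "le_n": ([], "le n n"),
    "le_S": (["le n n"], "le n (S n)"),
}

def auto(goal, depth, width):
    if depth == 0:
        return False                       # depth bound reached: prune
    for name, (premises, concl) in lemmas.items():
        if concl != goal:
            continue                       # lemma does not apply to this goal
        if len(premises) > width:
            continue                       # width bound: too many new goals
        if all(auto(p, depth - 1, width) for p in premises):
            return True
    return False

print(auto("le n (S n)", depth=2, width=2))   # True: apply le_S, then le_n
print(auto("le n (S n)", depth=1, width=2))   # False: cut off by the depth bound
```

The same goal succeeds or fails depending only on the bounds, which is why choosing them is the main tuning knob the user is exposed to.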
Recently we have extended automation with paramodulation-based
techniques. At present, the system works reasonably well with
equational rewriting, where the notion of equality is parametric
and can be specified by the user: the system only requires
a proof of {\em reflexivity} and {\em paramodulation} (or rewriting,
as it is usually called in the proof assistant community).

Given an equational goal, \MATITA{} recovers all known equational facts
from the library (and the local context), applying a variant of
the so-called {\em given-clause algorithm}~\cite{paramodulation},
that is the procedure currently used by the majority of modern
automatic theorem provers.

The given-clause algorithm is essentially composed of an alternation
of a \emph{saturation} phase and a \emph{demodulation} phase.
The former derives new facts from a set of active
facts and a new \emph{given} clause suitably selected from a set of passive
equations. The latter tries to simplify the equations,
orienting them according to a suitable weight associated with terms.
\MATITA{} currently supports several different weighting functions,
including the Knuth-Bendix ordering (KBO) and the recursive path ordering
(RPO), which integrates particularly well with normalization.
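The demodulation phase can be sketched in isolation: equations, once oriented left-to-right (here the orientation is simply given, standing in for a KBO/RPO comparison), are applied as rewrite rules until a normal form is reached. Terms are nested tuples and \texttt{?}-prefixed strings are pattern variables; everything below is an illustrative toy, not Matita's prover:

```python
rules = [  # Peano addition: x + 0 -> x,  x + S y -> S (x + y)
    (("plus", "?x", "O"), "?x"),
    (("plus", "?x", ("S", "?y")), ("S", ("plus", "?x", "?y"))),
]

def match(pat, term, sub=None):
    """First-order matching; returns a substitution dict or None."""
    sub = dict(sub or {})
    if isinstance(pat, str) and pat.startswith("?"):
        if pat in sub:
            return sub if sub[pat] == term else None
        sub[pat] = term
        return sub
    if isinstance(pat, tuple) and isinstance(term, tuple) and len(pat) == len(term):
        for p, t in zip(pat, term):
            sub = match(p, t, sub)
            if sub is None:
                return None
        return sub
    return sub if pat == term else None

def subst(t, sub):
    return sub.get(t, t) if isinstance(t, str) else tuple(subst(x, sub) for x in t)

def demodulate(t, rules):
    if isinstance(t, tuple):                 # normalize subterms first
        t = tuple(demodulate(x, rules) for x in t)
    for lhs, rhs in rules:                   # then rewrite at the root
        sub = match(lhs, t)
        if sub is not None:
            return demodulate(subst(rhs, sub), rules)
    return t

one = ("S", "O")
print(demodulate(("plus", one, one), rules))   # ('S', ('S', 'O')), i.e. 1+1 = 2
```

In the full given-clause loop, the same normalization is applied both to the selected given clause and to the goal, so that saturation only ever works with simplified equations.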
Demodulation alone is already a quite powerful technique, and
it has been turned into a tactic by itself: the \TAC{demodulate}
tactic, which can be seen as a kind of generalization of \TAC{simplify}.
The following portion of script shows two
interesting applications of this tactic (both of them relying
on elementary arithmetic equations):

\forall x: nat. (x+1)*(x-1) = x*x - 1.

[ simplify; reflexivity
| intro; demodulate; reflexivity ]

\forall x, y: nat. (x+y)*(x+y) = x*x + 2*x*y + y*y.
intros; demodulate; reflexivity

In the future we expect to integrate applicative and equational
rewriting. In particular, the overall idea would be to integrate
applicative rewriting with demodulation, treating saturation as an
operation to be performed in batch mode, e.g. during the night.
1255 \subsection{Naming convention}
A minor but not entirely negligible aspect of \MATITA{} is the
adoption of a (semi-)rigid naming convention for concept names, derived from
our studies on metadata for statements.
The convention is only applied to theorems
(not definitions), and relates theorem names to their statements.
The basic rules are the following:
\item each name is composed of an ordered list of (short)
identifiers occurring in a left-to-right traversal of the statement;
\item all identifiers should (though this is not strictly compulsory)
be separated by an underscore;
\item identifiers occurring in two different hypotheses, or in a hypothesis
and in the conclusion, must be separated by the string \texttt{\_to\_};
\item an identifier may be followed by a numerical suffix, or by a
single or double apostrophe.
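As an illustration of the rules above, the following Python sketch (hypothetical, not part of \MATITA) assembles a legal name from the identifiers collected during a left-to-right traversal of the statement, inserting \texttt{\_to\_} at hypothesis/conclusion boundaries:

```python
# Hypothetical sketch of the naming convention: "parts" is a list of
# identifier lists, one per hypothesis, the last one for the conclusion.
# Identifiers are joined by "_", boundaries by "_to_".

def legal_name(parts):
    return "_to_".join("_".join(p) for p in parts if p)

# forall n, m: nat. n < m -> n <= m  gives  lt_to_le:
print(legal_name([["lt"], ["le"]]))      # -> lt_to_le
# forall n: nat. n = plus n O  gives  plus_n_O:
print(legal_name([["plus", "n", "O"]]))  # -> plus_n_O
```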
Take for instance the statement:
\forall n: nat. n = plus n O
Possible legal names are: \texttt{plus\_n\_O}, \texttt{plus\_O},
\texttt{eq\_n\_plus\_n\_O} and so on.
Similarly, consider the statement
\forall n, m: nat. n < m \to n \leq m
In this case \texttt{lt\_to\_le} is a legal name,
while \texttt{lt\_le} is not.
But what about, say, the symmetric law of equality? You would probably like
to give such a theorem a name that explicitly recalls symmetry.
The correct approach,
in this case, is the following: you should start by defining the
symmetric property for relations:
definition symmetric \def
\lambda A: Type. \lambda R. \forall x, y: A.
Then, you may state the symmetry of equality as:
\forall A: Type. symmetric A (eq A)
and \texttt{symmetric\_eq} is a legal name for such a theorem.
So, somewhat unexpectedly, the introduction of a semi-rigid naming convention
has an important beneficial effect on the global organization of the library,
forcing the user to define abstract concepts and properties before
using them (and formalizing such use).
Two cases receive a special treatment. The first one concerns theorems whose
conclusion is a (universally quantified) predicate variable, i.e.\
theorems of the shape
$\forall P,\dots.\ P(t)$.
In this case you may replace the conclusion with the string
\texttt{elim} or \texttt{case}.
For instance, the name \texttt{nat\_elim2} is a legal name for the double
induction principle.
The other special case is that of statements whose conclusion is a
\texttt{match} expression.
A typical example is the following:
match (eqb n m) with
[ true \Rightarrow n = m
| false \Rightarrow n \neq m]
where \texttt{eqb} is boolean equality.
In these cases, the name can be built starting from the matched
expression and the suffix \texttt{\_to\_Prop}. In the above example,
\texttt{eqb\_to\_Prop} is accepted.
\section{The authoring interface}
\label{sec:authoring}
The authoring interface of \MATITA{} is very similar to Proof
General~\cite{proofgeneral}. We
chose not to build the \MATITA{} UI on top of Proof General for two reasons. First
of all, we wanted to integrate our XML-based rendering technologies, mainly
\GTKMATHVIEW, in the UI.
At the time of writing, Proof General supports only text-based
rendering.\footnote{This may change with future releases of Proof General
based on Eclipse (\url{http://www.eclipse.org/}), but this is not yet the
case.} The second reason is that we wanted
to build the \MATITA{} UI on top of a state-of-the-art and widespread graphical
toolkit such as \GTK{}.
Fig.~\ref{fig:screenshot} is a screenshot of the \MATITA{} authoring interface,
featuring two windows. The background one closely resembles the Proof General
interface. The main difference is that we use the \GTKMATHVIEW{} widget to
render sequents. Since \GTKMATHVIEW{} renders \MATHML{} markup, we take
advantage of the full bidimensional mathematical notation. The foreground
window is an instance of the cicBrowser (see Sect.~\ref{sec:library}), used to
render in natural language the proof being developed.
Note that the syntax used in the script view is \TeX-like, but
Unicode\footnote{\url{http://www.unicode.org/}} is
also fully supported, so that mathematical glyphs can be input as such.
\includegraphics[width=0.95\textwidth]{pics/matita-screenshot}
\caption{Authoring interface\strut}
\label{fig:screenshot}
Since the concepts of script-based proof authoring are well known, the
remaining part of this section is dedicated to the distinguishing
features of the \MATITA{} authoring interface.
\subsection{Direct manipulation of terms}
\label{sec:directmanip}
While terms are input as \TeX-like formulae in \MATITA, they are converted to a
mixed \MATHML+\BOXML{} markup for output purposes and then rendered by
\GTKMATHVIEW. As described in~\cite{latexmathml}, this mixed choice enables both
high-quality bidimensional rendering of terms (including the use of fancy
layout schemata like radicals and matrices) and the use of a
concise and widespread textual syntax.
By keeping pointers from the presentation-level terms down to the
partially specified ones, \MATITA{} enables direct manipulation of
rendered (sub)terms in the form of hyperlinks and semantic selection.
\emph{Hyperlinks} have anchors on the occurrences of constants and
inductive type constructors, and point to the corresponding definitions
in the library. Anchors are available notwithstanding the use of
user-defined mathematical notation: as can be seen on the right of
Fig.~\ref{fig:directmanip}, where we clicked on $\nmid$, symbols
encoding complex notations retain all the hyperlinks of the constants or
constructors used in the notation.
\emph{Semantic selection} enables the selection of mixed
\MATHML+\BOXML{} markup, constraining the selection to markup
representing meaningful CIC (sub)terms. In the example on the left of
Fig.~\ref{fig:directmanip} it is thus possible to select the subterm
$\mathrm{prime}~n$, whereas it would not be possible to select
$\to n$, since the former denotes an application while the
latter is not a subterm. Once a meaningful (sub)term has been
selected, actions like reductions or tactic applications can be performed on it.
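The constraint can be illustrated by a small Python sketch (the AST and the character spans below are invented for the example): a screen selection is accepted only when it coincides with the rendered span of some subterm node, mirroring how selections are constrained to meaningful CIC (sub)terms:

```python
# Illustrative sketch: each AST node carries the span (start, end) it
# occupies in the rendered text; a selection is semantically valid only
# if it matches the span of some node. Spans below are invented.

def spans(node, acc):
    span, children = node
    acc.add(span)
    for c in children:
        spans(c, acc)
    return acc

# "prime n -> 1 < n": "prime n" is an application node, "-> 1 < n" is
# not a subterm.
ast = ((0, 17), [((0, 7), []),                   # prime n
                 ((11, 17), [((11, 12), []),     # 1
                             ((15, 16), [])])])  # n
valid = spans(ast, set())
print((0, 7) in valid)   # True: "prime n" is a subterm
print((8, 16) in valid)  # False: the "-> 1 < n" fragment is not
```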
\includegraphics[width=0.40\textwidth]{pics/matita-screenshot-selection}
\hspace{0.05\textwidth}
\raisebox{0.4cm}{\includegraphics[width=0.50\textwidth]{pics/matita-screenshot-href}}
\caption[Semantic selection and hyperlinks]{Semantic selection (on the left)
and hyperlinks (on the right)\strut}
\label{fig:directmanip}
\subsection{Patterns}
\label{sec:patterns}
In several situations, working with direct manipulation of terms is
simpler and faster than typing the corresponding textual
commands~\cite{proof-by-pointing}.
Nonetheless, we need to record actions and selections in scripts.

In \MATITA{}, \emph{patterns} are textual representations of selections.
Users can select using the GUI and then ask the system to paste the
corresponding pattern in the script. More often, this process is
transparent to the user: once an action is performed on a selection,
the corresponding textual command is computed and inserted in the script.
\subsubsection{Pattern syntax}
Patterns are composed of two parts: \NT{sequent\_path} and
\NT{wanted}; their concrete syntax is reported in Tab.~\ref{tab:pathsyn}.

\NT{sequent\_path} mocks up a sequent, discharging unwanted subterms
with \OP{?} and selecting the interesting parts with the placeholder
\OP{\%}. \NT{wanted} is a term that lives in the context of the
selected roots.
Textual patterns produced from a graphical selection are made of the
\NT{sequent\_path} only. Such patterns can represent every selection,
but are quite verbose. The \NT{wanted} part of the syntax is meant to
help users write concise and elegant patterns by hand.
\caption{Patterns concrete syntax\strut}
\begin{array}{@{}rcll@{}}
::= & [~\verb+in+~\NT{sequent\_path}~]~[~\verb+match+~\NT{wanted}~] & \\
\NT{sequent\_path} &
::= & \{~\NT{ident}~[~\verb+:+~\NT{multipath}~]~\}~
[~\verb+\vdash+~\NT{multipath}~] & \\
\NT{multipath} & ::= & \NT{term\_with\_placeholders} & \\
\NT{wanted} & ::= & \NT{term} & \\
\subsubsection{Pattern evaluation}

Patterns are evaluated in two phases. The first selects roots
(subterms) of the sequent, using the \NT{sequent\_path}, while the
second searches for the \NT{wanted} term starting from those roots.
% Both are optional steps, and by convention the empty pattern selects
% the whole conclusion.
The first phase concerns only \NT{sequent\_path}. \NT{ident} is a hypothesis name and selects
the assumption on which the following optional \NT{multipath} will operate.
\verb+\vdash+ can be considered the name of the goal. If the whole pattern
is omitted, the whole goal is selected. If one or more hypothesis names
are given, the selection is restricted to those assumptions. If a
$\NT{multipath}$ is omitted, the whole assumption is selected. Remember that
the user can be mostly unaware of the concrete syntax of patterns, since the system
is able to write down a \NT{sequent\_path} starting from a graphical
selection.
A \NT{multipath} is a CIC term in which the special constant \OP{\%} is allowed.
The roots of discharged subterms are marked with \OP{?}, while \OP{\%} is used
to select roots. The default \NT{multipath}, the one that selects the whole
term, is simply \OP{\%}. Valid \NT{multipath} are, for example, \texttt{(? \%
?)} or \texttt{\% \TEXMACRO{to} (\% ?)}, which respectively select the first
argument of an application, or the source of an arrow together with the head of the
application found in the arrow target.
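The behaviour of the placeholders can be sketched in a few lines of Python (an illustration that assumes patterns mirroring the term structure exactly; this is not \MATITA's actual code):

```python
# Illustrative sketch of multipath evaluation: terms and multipaths are
# nested tuples of the same shape, "%" selects a root, "?" discharges a
# subterm. No matching errors are handled, for brevity.

def select(term, mp):
    if mp == "%":
        return [term]   # a selected root
    if mp == "?":
        return []       # a discharged subterm
    roots = []
    for t, p in zip(term, mp):
        roots.extend(select(t, p))
    return roots

# (? % ?) selects the first argument of a binary application:
print(select(("f", "a", "b"), ("?", "%", "?")))  # -> ['a']
# (? ? ? %) on  eq nat (m + n) n  selects the right-hand side:
print(select(("eq", "nat", ("plus", "m", "n"), "n"),
             ("?", "?", "?", "%")))              # -> ['n']
```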
This phase not only selects terms (roots of subterms), but also determines
their contexts, which may be used in the next phase.
The second phase plays a role only if \NT{wanted} is specified. From the first phase we
have some terms, which we will use as roots, and their contexts.
For each of these contexts, the \NT{wanted} term is disambiguated in it,
and the corresponding root is searched for a subterm that can be unified with
\NT{wanted}. The result of this search is the selection denoted by the
pattern.
\subsubsection{Examples}

Consider the following sequent:
\sequent{n: nat\\m: nat\\H: m + n = n}{m = O}
To change the right-hand side of the equality in hypothesis $H$
to $O + n$, the user selects it and pastes it as the pattern
in the following statement.
change in H:(? ? ? %) with (O + n).
To understand the pattern (or to produce it by hand) the user should be aware that
the notation $m + n = n$ hides the term $\mathrm{eq}~\mathrm{nat}~(m + n)~n$, so
that the pattern selects only the third argument of $\mathrm{eq}$.
The experienced user may also write by hand a concise pattern to change at once
all the occurrences of $n$ in the hypothesis $H$:
change in H match n with (O + n).
In this case the \NT{sequent\_path} selects the whole $H$, while
the second phase locates $n$.

The latter pattern is equivalent to the following one, that the system
can automatically generate from the selection.
change in H:(? ? (? ? %) %) with (O + n).
\subsubsection{Comparison with \COQ{}}
In \MATITA{} all the tactics that act on subterms of the current sequent
accept pattern arguments. Additional arguments can be disambiguated in the
contexts of the terms selected by the pattern
(\emph{context-dependent arguments}).
%\NOTE{currently rewrite and fold do not support phase 2. To support
%it, they must transform the phase1+phase2 pattern into a phase1-only
%pattern, as done in the last example, with
%pattern\_of(select(pattern))}
\COQ{} has two different ways of restricting the application of tactics to
subterms of the current sequent, both relying on the same special syntax to
identify a term occurrence.

The first way is to use this special syntax to tell the
tactic which occurrences of a wanted term should be affected.
The second is to prepare the sequent with another tactic, called
\TAC{pattern}, and then apply the real tactic. Note that the choice is not
left to the user, since some tactics need the sequent to be prepared
with \TAC{pattern} and do not directly accept this special syntax.
The idea is that to identify a subterm of the sequent we can
write it and say that we want, for example, its third and fifth
occurrences (counting from left to right). In our previous example,
to change only the right part of the equality, the correct command is
change n at 2 in H with (O + n)
meaning that in the hypothesis \texttt{H} the \texttt{n} we want to change is
the second one we encounter proceeding from left to right.
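The occurrence-counting mechanism can be sketched in Python as follows (an illustrative fragment, not \COQ's implementation; terms are nested tuples with string leaves):

```python
# Illustrative sketch of occurrence selection a la "change n at 2":
# walk the term left to right, count occurrences of the target, and
# replace only the requested one.

def replace_at(term, target, repl, at, count=None):
    if count is None:
        count = [0]     # shared mutable counter across the traversal
    if term == target:
        count[0] += 1
        return repl if count[0] == at else term
    if isinstance(term, tuple):
        return tuple(replace_at(t, target, repl, at, count) for t in term)
    return term

# H: m + n = n, replace the second n with (O + n):
H = ("eq", ("plus", "m", "n"), "n")
print(replace_at(H, "n", ("plus", "O", "n"), 2))
# -> ('eq', ('plus', 'm', 'n'), ('plus', 'O', 'n'))
```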
The tactic \TAC{pattern} computes a
$\beta$-expansion of a part of the sequent with respect to some
occurrences of the given term. In the previous example, invoking
\TAC{pattern} on the second occurrence of \texttt{n} in \texttt{H}
would have resulted in this sequent:
\sequent{n: nat\\m: nat\\H: (fun n0: nat => m + n = n0) n}{m = O}
where \texttt{H} is $\beta$-expanded over the second \texttt{n}
occurrence.
At this point, since the \COQ{} unification algorithm is essentially first-order,
the application of an elimination principle (of the form $\forall P.\forall
x.(H~x)\to (P~x)$) will unify \texttt{x} with \texttt{n} and \texttt{P} with
\texttt{(fun n0: nat => m + n = n0)}.
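The effect of the $\beta$-expansion can be simulated with a small Python sketch (terms as nested tuples, a \texttt{HOLE} marker playing the role of the bound variable \texttt{n0}; this is an illustration, not \COQ's code):

```python
# Illustrative sketch: abstracting the chosen occurrence of a term
# turns the hypothesis into an application (P x), which a first-order
# unifier can then match against an elimination principle.

HOLE = object()  # stands for the bound variable n0

def abstract(term, target, at, count=None):
    # Replace the at-th left-to-right occurrence of target with HOLE.
    if count is None:
        count = [0]
    if term == target:
        count[0] += 1
        if count[0] == at:
            return HOLE
    if isinstance(term, tuple):
        return tuple(abstract(t, target, at, count) for t in term)
    return term

def plug(body, x):
    # beta-reduction: fill every HOLE with x.
    if body is HOLE:
        return x
    if isinstance(body, tuple):
        return tuple(plug(t, x) for t in body)
    return body

# H: m + n = n  beta-expanded over the second n gives (P n) with
# P = fun n0 => m + n = n0; plugging n back recovers H:
H = ("eq", ("plus", "m", "n"), "n")
body = abstract(H, "n", 2)      # the body of P
assert plug(body, "n") == H     # (fun n0 => m + n = n0) n  =beta=  H
print(plug(body, ("plus", "O", "n")))  # instantiating P elsewhere
```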
Since \TAC{rewrite}, \TAC{replace} and several other tactics boil down to
the application of the equality elimination principle, the previous
trick implements the expected behaviour.
The idea behind this way of identifying subterms is not really far
from the idea behind patterns, but it fails to extend to
complex notation, since it relies on a mono-dimensional representation of the
sequent. Real mathematical notation places arguments above and below (as in
indexed sums or integrals) or even inside a bidimensional matrix.
In these cases, using the mouse to select the wanted term is probably the
most effective way to tell the system what to do.
One of the goals of \MATITA{} is to use modern publishing techniques, and
adopting a method for restricting the application domain of tactics that
discourages the use of heavy mathematical notation would have definitely been a bad choice.
In \MATITA{}, tactics accepting pattern arguments can be more expressive than
the equivalent tactics in \COQ{}, since variables bound in the pattern context
can occur in context-dependent arguments. For example, consider the sequent:
\sequent{n: nat\\x: nat\\H: \forall m. n + m*n = x + m}{m = O}
In \MATITA{} the user can issue the command:
change in H: \forall _. (? ? % ?) with (S m) * n.
to change $n+m*n$ into $(S~m)*n$. To achieve the same effect in \COQ, the
user is obliged to change the whole hypothesis, rewriting its right-hand side
\subsection{Tacticals}
\label{sec:tinycals}
The procedural proof language implemented in \MATITA{} is pretty standard,
with a notable exception: tacticals.

Tacticals first appeared in LCF~\cite{lcf} as higher-order tactics.
They can be seen as control flow constructs, like looping, branching,
error recovery and sequential composition.
The following simple example shows a \COQ{} script made of four dot-terminated
commands:
A = B -> ((A -> B) /\ (B -> A)).
[ rewrite < H; assumption
| rewrite > H; assumption
The third command is an application of the sequencing tactical \OP{$\ldots$~;~$\ldots$},
which combines the tactic \TAC{split} with the application of the branching
tactical \OP{$\ldots$~;[~$\ldots$~|~$\ldots$~|~$\ldots$~]} to other tactics or tacticals.
The usual implementation of tacticals executes them atomically, as any
other command. In \MATITA{} this is not the case: each punctuation
symbol is executed as a single command.
\subsubsection{Common issues of tacticals}
We will examine the two main problems of procedural proof languages:
maintainability and readability.
Tacticals are not only used to make scripts shorter by factoring out
common cases and repeated commands. They are the primary way of making
scripts more maintainable. They also have the well-known duty of
structuring the proof by means of the branching tactical.
However, authoring a proof structured with tacticals is annoying.
Consider for example a proof by induction, and imagine you
are using one of the state-of-the-art graphical interfaces for proof
assistants, Proof General. After applying the induction principle you have to
choose whether to structure the proof immediately or to postpone the structuring.
If you opt for the former, you have to apply the branching tactical and write
at once tactics for all the cases. Since the user does not even know the
generated goals yet, he can only replace all the cases with the identity
tactic and execute the command, just to receive feedback on the first
goal. Then he has to go one step back to replace the first identity
tactic with the wanted one, and repeat the process until all the
branches are closed.
One could imagine that a structured script is simpler to understand.
This is not the case.
A proof script, not being declarative, is not meant to be read.
However, the user needs to explain it to others.
This is achieved by interactively re-playing the script to show each
intermediate proof status. Tacticals make this operation uncomfortable.
Indeed, a tactical is executed atomically, while it obviously
performs a lot of smaller steps we are interested in.
To show the intermediate steps, the proof must be de-structured on the
fly, for example replacing \OP{;} with \OP{.} where possible.
\MATITA{} has a peculiar tacticals implementation that provides the
same benefits as classical tacticals, while not burdening the user
during proof authoring and re-playing.
\subsubsection{The \MATITA{} approach}
\caption{Concrete syntax of tacticals\strut}
\begin{array}{@{}rcll@{}}
::= & \SEMICOLON \quad|\quad \DOT \quad|\quad \SHIFT \quad|\quad \BRANCH \quad|\quad \MERGE \quad|\quad \POS{\mathrm{NUMBER}~} & \\
::= & \verb+focus+ ~|~ \verb+try+ ~|~ \verb+solve+ ~|~ \verb+first+ ~|~ \verb+repeat+ ~|~ \verb+do+~\mathrm{NUMBER} & \\
::= & \verb+begin+ ~|~ \verb+end+ & \\
::= & \verb+skip+ ~|~ \NT{tactic} ~|~ \NT{block\_delim} ~|~ \NT{block\_kind} ~|~ \NT{punctuation} \\
The \MATITA{} tactical syntax is reported in Tab.~\ref{tab:tacsyn}.
LCF tacticals have been replaced by more primitive, unstructured commands;
every LCF tactical is semantically equivalent to a sequential composition of
them. As usual, each command is executed atomically, so that a sequence
corresponding to an LCF tactical is now executed in multiple steps.
For instance, reconsider the previous example of a proof by induction.
In \MATITA{} the user can apply the induction principle and just
open the branching punctuation symbol \OP{[}. Then he can interact with the
system (applying tactics and so forth) until he decides to move to the
next branch using \OP{|}. After the last branch, the punctuation symbol
\OP{]} must be used to collect the goals possibly left open, according to
the semantics of the LCF branching tactical \OP{$\ldots$~;[~$\ldots$~|~$\ldots$~|~$\ldots$~]}. The result, effortlessly obtained, is a structured script.
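A minimal Python sketch (hypothetical, not \MATITA's \OCAML{} code) shows how \OP{[}, \OP{|} and \OP{]} can be executed one at a time over a stack of goal lists, with goals left open in a branch collected again at \OP{]}:

```python
# Hypothetical sketch: the branching punctuation symbols "[", "|", "]"
# executed as individual commands over a stack of goal lists.
# No error handling, for brevity.

class Status:
    def __init__(self, goals):
        self.goals = goals       # goals currently in focus
        self.stack = []          # frames: [pending_branches, postponed]

    def branch(self):            # "[": focus on the first open goal
        self.stack.append([self.goals[1:], []])
        self.goals = [self.goals[0]]

    def shift(self):             # "|": set leftovers aside, next branch
        pending, postponed = self.stack[-1]
        self.stack[-1] = [pending[1:], postponed + self.goals]
        self.goals = [pending[0]]

    def merge(self):             # "]": collect every goal still open
        pending, postponed = self.stack.pop()
        self.goals = postponed + self.goals + pending

    def close_goal(self):        # stand-in for a tactic closing a goal
        self.goals = self.goals[1:]

s = Status(["g1", "g2", "g3"])
s.branch(); s.close_goal()       # [ ... close the first branch
s.shift()                        # | focus on g2, but leave it open
s.shift(); s.close_goal()        # | g2 is postponed; close g3
s.merge()                        # ] the open g2 is collected again
print(s.goals)                   # -> ['g2']
```

Because each punctuation symbol is a command of its own, the system can stop and show the proof state after any of them, which is exactly what makes step-by-step re-playing possible.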
The user is not forced to fully structure his script. If he wants, he
can even write completely unstructured proofs using only the \OP{.}

Re-playing a proof is also straightforward, since there is no longer any need
to manually de-structure the proof.
\section{Standard library}
\MATITA{} is \COQ{} compatible, in the sense that every theorem of \COQ{} can be
read, checked and referenced in further developments. However, in order to test
the actual usability of the system, a new library of results has been started
from scratch. In this case, of course, we wrote (and offer) the source scripts,
while in the case of \COQ{} \MATITA{} may only rely on the XML files of \COQ{}
The current library comprises about one thousand theorems on
elementary aspects of arithmetic, up to the multiplicative property of
Euler's totient function $\phi$.
The library is organized in five main directories: \texttt{logic} (connectives,
quantifiers, equality, \ldots), \texttt{datatypes} (basic datatypes and type
constructors), \texttt{nat} (natural numbers), \texttt{Z} (integers), \texttt{Q}
(rationals). The most complex development is \texttt{nat}, organized in 25
scripts, listed in Tab.~\ref{tab:scripts}.
\begin{tabular}{lll}
\FILE{nat.ma} & \FILE{plus.ma} & \FILE{times.ma} \\
\FILE{minus.ma} & \FILE{exp.ma} & \FILE{compare.ma} \\
\FILE{orders.ma} & \FILE{le\_arith.ma} & \FILE{lt\_arith.ma} \\
\FILE{factorial.ma} & \FILE{sigma\_and\_pi.ma} & \FILE{minimization.ma} \\
\FILE{div\_and\_mod.ma} & \FILE{gcd.ma} & \FILE{congruence.ma} \\
\FILE{primes.ma} & \FILE{nth\_prime.ma} & \FILE{ord.ma} \\
\FILE{count.ma} & \FILE{relevant\_equations.ma} & \FILE{permutation.ma} \\
\FILE{factorization.ma} & \FILE{chinese\_reminder.ma} &
\FILE{fermat\_little\_th.ma} \\
\FILE{totient.ma} & & \\
\caption{Scripts on natural numbers in the standard library\strut}
We do not plan to maintain the library in a centralized way,
as most of the systems do. On the contrary, we are currently
developing wiki technologies to support the collaborative
development of the library, encouraging people to expand,
modify and elaborate on previous contributions.
\section{Conclusions}
\label{sec:conclusion}
We would like to thank all the people who, during the past
seven years, collaborated on the \HELM{} project and contributed to
the development of \MATITA{}, and in particular
M.~Galat\`a, A.~Griggio, F.~Guidi, P.~Di~Lena, L.~Padovani, I.~Schena, M.~Selmi,
\bibliography{matita}