\newcommand{\COQIDE}{CoqIde}
\newcommand{\ELIM}{\textsc{Elim}}
\newcommand{\GDOME}{Gdome}
+\newcommand{\GTK}{GTK+}
\newcommand{\GTKMATHVIEW}{\textsc{GtkMathView}}
\newcommand{\HELM}{Helm}
\newcommand{\HINT}{\textsc{Hint}}
\newcommand{\IR}{\ensuremath{\dR}}
\newcommand{\IZ}{\ensuremath{\dZ}}
\newcommand{\LIBXSLT}{LibXSLT}
+\newcommand{\LEGO}{Lego}
\newcommand{\LOCATE}{\textsc{Locate}}
\newcommand{\MATCH}{\textsc{Match}}
\newcommand{\MATHML}{MathML}
\fcolorbox{black}{gray}{\usebox{\tmpxyz}}
\end{center}}
-\bibliographystyle{alpha}
+\bibliographystyle{klunum}
\begin{document}
\MATITA{} is the Proof Assistant under development by the \HELM{}
team~\cite{mkm-helm} at the University of Bologna, under the direction of
-Prof.~Asperti. The paper describes the overall architecture of
+Prof.~Asperti. This paper describes the overall architecture of
the system, focusing on its most distinctive and innovative
features.
\subsection{Historical perspective}
The origins of \MATITA{} go back to 1999. At the time we were mostly
-interested to develop tools and techniques to enhance the accessibility
-via Web of formal libraries of mathematics. Due to its dimension, the
+interested in developing tools and techniques to enhance the accessibility
+via Web of libraries of formalized mathematics. Due to its size, the
library of the \COQ~\cite{CoqManual} proof assistant (of the order of 35'000 theorems)
was chosen as a privileged test bench for our work, although experiments
have also been conducted with other systems, notably
-with \NUPRL~\cite{nuprl-book}.
+with \NUPRL~\cite{nuprl-book}.\TODO{cite Vincenzo's thesis(?)}
The work, mostly performed in the framework of the recently concluded
European project \MOWGLIIST{} \MOWGLI~\cite{pechino}, mainly consisted of the
following steps:
-\begin{itemize}
-\item exporting the information from the internal representation of
- \COQ{} to a system and platform independent format. Since XML was at the
-time an emerging standard, we naturally adopted this technology, fostering
-a content-centric architecture~\cite{content-centric} where the documents
-of the library were the the main components around which everything else
-has to be build;
-\item developing indexing and searching techniques supporting semantic
- queries to the library;
-\item developing languages and tools for a high-quality notational
-rendering of mathematical information\footnote{We have been
-active in the \MATHML{} Working group since 1999.};
-\end{itemize}
+\begin{enumerate}
+
+ \item exporting the information from the internal representation of
+ \COQ{} to a system and platform independent format. Since XML was at
+ the time an emerging standard, we naturally adopted that technology,
+ fostering a content-centric architecture~\cite{content-centric} where
+  the documents of the library were the main components around which
+  everything else had to be built;
+
+ \item developing indexing and searching techniques supporting semantic
+ queries to the library;
+
+ \item developing languages and tools for a high-quality notational
+ rendering of mathematical information.\footnote{We have been active in
+ the \MATHML{} Working group since 1999.}
+
+\end{enumerate}
According to our content-centric commitment, the library exported from
\COQ{} was conceived as being distributed and most of the tools were developed
-as Web services. The user could interact with the library and the tools by
+as Web services. The user can interact with the library and the tools by
means of a Web interface that orchestrates the Web services.
-The Web services and the other tools have been implemented as front-ends
+Web services and other tools have been implemented as front-ends
to a set of software components, collectively called the \HELM{} components.
At the end of the \MOWGLI{} project we already had at our disposal the
following tools and software components:
\begin{itemize}
-\item XML specifications for the Calculus of Inductive Constructions,
-with components for parsing and saving mathematical objects in such a
-format~\cite{exportation-module};
-\item metadata specifications with components for indexing and querying the
-XML knowledge base;
-\item a proof checker library (i.e. the {\em kernel} of a proof assistant),
-implemented to check that we exported from the \COQ{} library all the
-logically relevant content;
-\item a sophisticated parser (used by the search engine), able to deal
-with potentially ambiguous and incomplete information, typical of the
-mathematical notation~\cite{disambiguation};
-\item a {\em refiner} library, i.e. a type inference system, based on
-partially specified terms, used by the disambiguating parser;
-\item complex transformation algorithms for proof rendering in natural
-language~\cite{remathematization};
-\item an innovative, \MATHML-compliant rendering widget for the GTK
-graphical environment~\cite{padovani}, supporting
-high-quality bidimensional
-rendering, and semantic selection, i.e. the possibility to select semantically
-meaningful rendering expressions, and to paste the respective content into
-a different text area.
+
+ \item XML specifications for the Calculus of Inductive Constructions,
+ with components for parsing and saving mathematical objects in such a
+ format~\cite{exportation-module};
+
+ \item metadata specifications with components for indexing and querying the
+ XML knowledge base;
+
+ \item a proof checker (i.e. the \emph{kernel} of a proof assistant),
+ implemented to check that we exported from the \COQ{} library all the
+ logically relevant content;
+
+ \item a sophisticated term parser (used by the search engine), able to deal
+ with potentially ambiguous and incomplete information, typical of the
+ mathematical notation~\cite{disambiguation};
+
+ \item a \emph{refiner} component, i.e. a type inference system, based on
+ partially specified terms, used by the disambiguating parser;
+
+ \item complex transformation algorithms for proof rendering in natural
+ language~\cite{remathematization};
+
+  \item an innovative, \MATHML-compliant rendering widget~\cite{padovani}
+  for the \GTK{} graphical environment,\footnote{\url{http://www.gtk.org/}}
+  supporting high-quality two-dimensional rendering and semantic
+  selection, i.e. the possibility to select semantically meaningful
+  rendered expressions and to paste the corresponding content into a
+  different text area.
+
\end{itemize}
+
Starting from all this, developing our own proof assistant did not
seem too far off: essentially, we ``just'' had to
add an authoring interface and a set of functionalities for the
Church enriched with primitive inductive and co-inductive data types.
Via the Curry-Howard isomorphism, the calculus can be seen as a very
rich higher order logic and proofs can be simply represented and
-stored as lambda-terms. \COQ{} and Lego are other systems that adopt
-(variations of) CIC as their foundation.
+stored as lambda-terms. \COQ{} and \LEGO~\cite{lego} are other systems
+that adopt (variations of) CIC as their foundation.
The proof language of \MATITA{} is procedural, in the tradition of the LCF
-theorem prover. \COQ, \NUPRL, PVS, Isabelle are all examples of others systems
+theorem prover~\cite{lcf}. \COQ, \NUPRL, PVS, and Isabelle are all examples
+of other systems
whose proof language is procedural. Traditionally, in a procedural system
the user interacts only with the \emph{script}, while proof terms are internal
records kept by the system. On the contrary, in \MATITA{} proof terms are
-praised as declarative versions of the proof. With this role, they are the
+praised as declarative versions of the proof. Playing that role, they are the
primary means of communication of proofs (once rendered to natural language
for human audiences).
standard way to interact with the system. Several procedural proof assistants
have either adopted or cloned Proof General as their main user interface.
The authoring interface of \MATITA{} is a clone of the Proof General interface.
+On the contrary, the interface for interacting with the library is rather
+innovative, directly inspired by the Web interfaces to our Web servers.
-\TODO{item che seguono:}
-\begin{itemize}
- \item sistema indipendente (da \COQ)
- \item compatibilit\`a con sistemi legacy
-\end{itemize}
+\MATITA{} is backward compatible with the XML library of proof objects exported
+from \COQ{}, but, in order to test the actual usability of the system, we are
+also developing a new library of basic results from scratch.
\subsection{Relationship with \COQ{}}
above than the result of a deliberate design. In particular, we
(essentially) share the same foundational dialect of \COQ{} (the
Calculus of (Co)Inductive Constructions), the same implementation
-language (\OCAML{}), and the same (script based) authoring philosophy.
-However, the analogy essentially stops here and no code is shared by the
-two systems.
+language (\OCAML\footnote{\url{http://caml.inria.fr/}}),
+and the same (procedural, script based) authoring philosophy.
+However, the analogy essentially stops here and no code is shared
+between the two systems.
In a sense, we like to think of \MATITA{} as the way \COQ{} would
look if entirely rewritten from scratch: just to give an
idea, although \MATITA{} currently supports almost all functionalities of
\COQ{}, it links 60'000 lines of \OCAML{} code, against the 166'000 lines linked
by \COQ{} (and we are convinced that, starting from scratch again,
-we could reduce our code even further in sensible way).
+we could reduce our code even further in a sensible way).
Moreover, the complexity of the code of \MATITA{} is greatly reduced with
respect to \COQ. For instance, the API of the components of \MATITA{} comprises
the parser for ambiguous mathematical notation.
The size and complexity improvements over \COQ{} must be understood
-historically. \COQ{} is a quite old
-system whose development started 20 years ago. Since then
+historically. \COQ~\cite{CoqArt} is a rather old
+system whose development started 20 years ago. Since then,
several developers have taken over the code, and several new research ideas
that were not considered in the original architecture have been experimented
with and integrated into the system. Moreover, there exist a lot of developments
\end{figure}
Fig.~\ref{fig:libraries} shows the architecture of the \emph{\components}
-(circle nodes) and \emph{applications} (squared nodes) developed in the HELM
-project. Each node is annotated with the number of lines of source code
-(comprising comments).
+(circle nodes) and \emph{applications} (squared nodes) developed in the
+\HELM{} project. Each node is annotated with the number of lines of
+source code (comprising comments).
-Applications and \components{} depend over other \components{} forming a
+Applications and \components{} depend on other \components{} forming a
directed acyclic graph (DAG). Each \component{} can be decomposed into
-a a set of \emph{modules} also forming a DAG.
+a set of \emph{modules} also forming a DAG.
Modules and \components{} provide coherent sets of functionalities
at different scales. Applications that require only a few functionalities
-depend on a restricted set of \components{}.
+depend on a restricted set of \components.
Only the proof assistant \MATITA{} and the \WHELP{} search engine are
applications meant to be used directly by the user. All the other applications
-are Web services developed in the HELM and MoWGLI projects and already described
-elsewhere. In particular:
+are Web services developed in the \HELM{} and \MOWGLI{} projects and already
+described elsewhere. In particular:
\begin{itemize}
- \item The \emph{\GETTER} is a Web service to retrieve an (XML) document
- from a physical location (URL) given its logical name (URI). The Getter is
- responsible of updating a table that maps URIs to URLs. Thanks to the Getter
- it is possible to work on a logically monolithic library that is physically
- distributed on the network. More information on the Getter can be found
- in~\cite{zack-master}.
- \item \emph{\WHELP} is a search engine to index and locate mathematical
- notions (axioms, theorems, definitions) in the logical library managed
- by the Getter. Typical examples of a query to Whelp are queries that search
- for a theorem that generalize or instantiate a given formula, or that
- can be immediately applied to prove a given goal. The output of Whelp is
- an XML document that lists the URIs of a complete set of candidates that
- are likely to satisfy the given query. The set is complete in the sense
- that no notion that actually satisfies the query is thrown away. However,
- the query is only approximated in the sense that false matches can be
- returned. Whelp has been described in~\cite{whelp}.
- \item \emph{\UWOBO} is a Web service that, given the URI of a mathematical
- notion in the distributed library, renders it according to the user provided
- two dimensional mathematical notation. \UWOBO{} may also embed the rendering
- of mathematical notions into arbitrary documents before returning them.
- The Getter is used by \UWOBO{} to retrieve the document to be rendered.
- \UWOBO{} has been described in~\cite{zack-master}.
- \item The \emph{Proof Checker} is a Web service that, given the URI of
- notion in the distributed library, checks its correctness. Since the notion
- is likely to depend in an acyclic way over other notions, the proof checker
- is also responsible of building in a top-down way the DAG of all
- dependencies, checking in turn every notion for correctness.
- The proof checker has been described in~\cite{zack-master}.
- \item The \emph{Dependency Analyzer} is a Web service that can produce
- a textual or graphical representation of the dependencies of an object.
- The dependency analyzer has been described in~\cite{zack-master}.
+
+  \item The \emph{\GETTER}~\cite{zack-master} is a Web service to
+  retrieve an (XML) document from a physical location (URL) given its
+  logical name (URI). The Getter is responsible for updating a table that
+  maps URIs to URLs (an illustrative example of the mapping is sketched
+  right after this list). Thanks to the Getter it is possible to work on
+  a logically monolithic library that is physically distributed on the
+  network.
+
+ \item \emph{\WHELP}~\cite{whelp} is a search engine to index and
+ locate mathematical concepts (axioms, theorems, definitions) in the
+ logical library managed by the Getter. Typical examples of
+  \WHELP{} queries are those that search for a theorem that generalizes or
+  instantiates a given formula, or that can be immediately applied to
+  prove a given goal. The output of \WHELP{} is an XML document that lists
+ the URIs of a complete set of candidates that are likely to satisfy
+ the given query. The set is complete in the sense that no concept that
+ actually satisfies the query is thrown away. However, the query is
+ only approximated in the sense that false matches can be returned.
+
+ \item \emph{\UWOBO}~\cite{zack-master} is a Web service that, given the
+ URI of a mathematical concept in the distributed library, renders it
+  according to the user-provided two-dimensional mathematical notation.
+ \UWOBO{} may also inline the rendering of mathematical concepts into
+ arbitrary documents before returning them. The Getter is used by
+ \UWOBO{} to retrieve the document to be rendered.
+
+ \item The \emph{Proof Checker}~\cite{zack-master} is a Web service
+ that, given the URI of a concept in the distributed library, checks its
+ correctness. Since the concept is likely to depend in an acyclic way
+  on other concepts, the proof checker is also responsible for building
+ in a top-down way the DAG of all dependencies, checking in turn every
+ concept for correctness.
+
+ \item The \emph{Dependency Analyzer}~\cite{zack-master} is a Web
+ service that can produce a textual or graphical representation of the
+ dependencies of a concept.
+
\end{itemize}
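+
+As an illustrative sketch of the mapping maintained by the \GETTER{}
+(both names below are purely hypothetical), the logical name
+\texttt{cic:/matita/nat/plus.con} could be mapped to a physical location
+such as \texttt{http://mowgli.cs.unibo.it/xml/nat/plus.con.xml}.
+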
The dependency of a \component{} or application on another \component{} can
be satisfied by linking the \component{} in the same executable.
For those \components{} whose functionalities are also provided by the
aforementioned Web services, it is also possible to link stub code that
-forwards the request to a remote Web service. For instance, the Getter
-is just a wrapper to the \GETTER{} \component{} that allows the
-\component{} to be used as a Web service. \MATITA{} can directly link the code
-of the \GETTER{} \component, or it can use a stub library with the same
-API that forwards every request to the Getter.
+forwards the request to a remote Web service. For instance, the
+\GETTER{} application is just a wrapper to the \GETTER{} \component{}
+that allows it to be used as a Web service. \MATITA{} can directly link
+the code of the \GETTER{} \component, or it can use a stub library with
+the same API that forwards every request to the Web service.
To better understand the architecture of \MATITA{} and the role of each
-\component, we can focus on the representation of the mathematical information.
-\MATITA{} is based on (a variant of) the Calculus of (Co)Inductive
-Constructions (CIC). In CIC terms are used to represent mathematical
-formulae, types and proofs. \MATITA{} is able to handle terms at
-four different levels of specification. On each level it is possible to provide
-a different set of functionalities. The four different levels are:
-fully specified terms; partially specified terms;
-content level terms; presentation level terms.
+\component, we can focus on the representation of the mathematical
+information. In CIC, terms are used to represent mathematical formulae,
+types and proofs. \MATITA{} is able to handle terms at four different
+levels of specification. On each level it is possible to provide a
+different set of functionalities. The four different levels are: fully
+specified terms; partially specified terms; content level terms;
+presentation level terms.
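+
+As a rough illustration (concrete syntaxes are simplified and the URI
+shown is purely hypothetical), the formula asserting that $n$ is
+smaller than or equal to $m$ may be written $n \leq m$ at the
+presentation level; at the content level it becomes the application of
+a ``lower or equal'' symbol to $n$ and $m$, with no commitment to a
+particular definition; at the partially and fully specified levels it
+becomes the application of one precise constant of the library, say
+\texttt{cic:/matita/nat/orders/le.ind}, to the arguments $n$ and $m$.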
\subsection{Fully specified terms}
\label{sec:fullyintro}
\emph{Fully specified terms} are CIC terms where no information is
missing or left implicit. A fully specified term should be well-typed.
- The mathematical notions (axioms, definitions, theorems) that are stored
+ The mathematical concepts (axioms, definitions, theorems) that are stored
in our mathematical library are fully specified and well-typed terms.
Fully specified terms are extremely verbose (to make type-checking
decidable). Their syntax is fixed and does not resemble the usual
consumption.
The \texttt{cic} \component{} defines the data type that represents CIC terms
- and provides a parser for terms stored in an XML format.
+ and provides a parser for terms stored in XML format.
The most important \component{} that deals with fully specified terms is
\texttt{cic\_proof\_checking}. It implements the procedure that verifies
\emph{conversion} judgement that verifies if two given terms are
computationally equivalent (i.e. they share the same normal form).
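+ For instance, assuming the usual definition of addition by recursion
+ on the first argument over the unary natural numbers, the terms
+ $O + n$ and $n$ are convertible, since both reduce to the normal
+ form $n$.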
- Terms may reference other mathematical notions in the library.
+ Terms may reference other mathematical concepts in the library.
One commitment of our project is that the library should be physically
distributed. The \GETTER{} \component{} manages the distribution,
providing a mapping from logical names (URIs) to the physical location
- of a notion (an URL). The \texttt{urimanager} \component{} provides the URI
+  of a concept (a URL). The \texttt{urimanager} \component{} provides the URI
data type and several utility functions over URIs. The
\texttt{cic\_proof\_checking} \component{} calls the \GETTER{}
\component{} every time it needs to retrieve the definition of a mathematical
- notion referenced by a term that is being type-checked.
+ concept referenced by a term that is being type-checked.
- The Proof Checker is the Web service that provides an interface
+ The Proof Checker application is the Web service that provides an interface
to the \texttt{cic\_proof\_checking} \component.
- We use metadata and a sort of crawler to index the mathematical notions
- in the distributed library. We are interested in retrieving a notion
+ We use metadata and a sort of crawler to index the mathematical concepts
+ in the distributed library. We are interested in retrieving a concept
by matching, instantiation or generalization of a user or system provided
mathematical formula. Thus we need to collect metadata over the fully
specified terms and to store the metadata in some kind of (relational)
database for later usage. The \texttt{hmysql} \component{} provides
a simplified
- interface to a (possibly remote) MySql database system used to store the
- metadata. The \texttt{metadata} \component{} defines the data type of the
- metadata
+ interface to a (possibly remote) MySQL\footnote{\url{http://www.mysql.com/}}
+ database system used to store the metadata.
+ The \texttt{metadata} \component{} defines the data type of the metadata
 we are collecting and the functions that extract the metadata from the
- mathematical notions (the main functionality of the crawler).
+ mathematical concepts (the main functionality of the crawler).
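+  For instance, for a statement such as
+  $\forall n,m:nat.\; n < m \to n \leq m$, the extracted metadata
+  roughly record that the concept refers to the ``lower than'' relation
+  in a hypothesis position and to the ``lower or equal'' relation in
+  the conclusion.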
The \texttt{whelp} \component{} implements a search engine that performs
approximated queries by matching/instantiation/generalization. The queries
operate only on the metadata and do not involve any actual matching
- (that will be described later on and that is implemented in the
- \texttt{cic\_unification} \component). Not performing any actual matching
- the query only returns a complete and hopefully small set of matching
+ (see the \texttt{cic\_unification} \component in
+  Sect.~\ref{sec:partiallyintro}). Not performing any actual matching,
+  a query only returns a complete and hopefully small set of matching
  candidates. The process that issued the query is responsible for
  actually retrieving the candidates from the distributed library in
  order to prune out false matches, if it is interested in doing so.
- The Whelp search engine is the Web service that provides an interface to
+ The \WHELP{} application is the Web service that provides an interface to
the \texttt{whelp} \component.
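+ As a purely indicative sketch (the concrete command syntax is an
+ assumption here), the same queries can also be triggered from within
+ \MATITA{}; for instance, the user may ask for all the theorems that
+ can be applied to prove the current goal with:
+\begin{grafite}
+whelp hint.
+\end{grafite}
+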
According to our vision, the library is developed collaboratively so that
- changing or removing a notion can invalidate other notions in the library.
- Moreover, changing or removing a notion requires a corresponding change
+ changing or removing a concept can invalidate other concepts in the library.
+ Moreover, changing or removing a concept requires a corresponding change
  in the metadata database. The \texttt{library} \component{} is responsible
  for preserving the coherence of the library and the database. For instance,
- when a notion is removed, all the notions that depend on it and their
+ when a concept is removed, all the concepts that depend on it and their
metadata are removed from the library. This aspect will be better detailed
in Sect.~\ref{sec:libmanagement}.
a sequent. The former are called \emph{implicit terms} and they occur only
linearly. The latter may occur multiple times and are called
\emph{metavariables}. An \emph{explicit substitution} is applied to each
-occurrence of a metavariable. A metavariable stand for a term whose type is
+occurrence of a metavariable. A metavariable stands for a term whose type is
given by the conclusion of the sequent. The term must be closed in the
context that is given by the ordered list of hypotheses of the sequent.
The explicit substitution instantiates every hypothesis with an actual
Partially specified terms are not required to be well-typed. However, a
partially specified term should be \emph{refinable}. A \emph{refiner} is
a type-inference procedure that can instantiate implicit terms and
-metavariables and that can introduce \emph{implicit coercions} to make a
+metavariables and that can introduce
+\emph{implicit coercions}~\cite{barthe95implicit} to make a
partially specified term well-typed. The refiner of \MATITA{} is implemented
in the \texttt{cic\_unification} \component. As the type checker is based on
the conversion check, the refiner is based on \emph{unification} that is
The \texttt{grafite} \component{} defines the abstract syntax tree (AST) for the
commands of the \MATITA{} proof assistant. Most of the commands are tactics.
Other commands are used to give definitions and axioms or to state theorems
-and lemmas. The \texttt{grafite\_engine} \component{} is the core of \MATITA{}.
+and lemmas. The \texttt{grafite\_engine} \component{} is the core of \MATITA.
It implements the semantics of each command in the grafite AST as a function
from status to status. It implements also an undo function to go back to
previous statuses.
those of addition over the unary representation. And addition over two natural
numbers is definitely different from addition over two real numbers.
-Formal mathematics cannot hide these differences and obliges the user to be
+Formalized mathematics cannot hide these differences and obliges the user to be
very precise on the types he is using and their representation. However,
to communicate formulae to the user and to external tools, it seems good
practice to stick to the usual imprecise mathematical ontology. In the
Mathematical Knowledge Management community this imprecise language is called
-the \emph{content level} representation of formulae.
+the \emph{content level}~\cite{adams} representation of formulae.
-In \MATITA{} we provide two translations: from partially specified terms
+In \MATITA{} we provide translations from partially specified terms
to content level terms and the other way around. The first translation can also
be applied to fully specified terms since a fully specified term is a special
case of partially specified term where no metavariable or implicit term occurs.
adopted has greatly influenced the OMDoc~\cite{omdoc} proof format that is now
isomorphic to it. Terms that represent formulae are translated to \MATHML{}
Content formulae. \MATHML{} Content~\cite{mathml} is a W3C standard
-for the representation of content level formulae in an XML extensible format.
+for the representation of content level formulae in an extensible XML format.
The translation to content level is implemented in the
\texttt{acic\_content} \component. Its input are \emph{annotated partially
proofs and terms that represent formulae. Part of it is also stored at the
content level since it is required to generate the natural language rendering
of proofs. The terms need to be maximally unshared (i.e. they must be a tree
-and not a DAG). The reason is that to the occurrences of a subterm in
-two different positions we need to associate different typing informations.
+and not a DAG). The reason is that to different occurrences of a subterm
+we need to associate different typing information.
This association is made easier when the term is represented as a tree since
it is possible to label each node with a unique identifier and associate
the typing information using a map on the identifiers.
is guided by an \emph{interpretation}, that is a function that chooses for
every ambiguous formula one partially specified term. The
\texttt{cic\_disambiguation} \component{} implements the
-disambiguation algorithm we presented in~\cite{disambiguation} that is
-responsible of building in an efficient way the set of all ``correct''
+disambiguation algorithm presented in~\cite{disambiguation} that is
+responsible for building in an efficient way the set of all correct
interpretations. An interpretation is correct if the partially specified term
obtained using the interpretation is refinable.
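+For instance, in a formula such as $x + y$ the symbol $+$ may stand
+for addition over natural, integer, or real numbers; each choice
+yields a different partially specified term, and only the choices
+leading to refinable terms give rise to correct interpretations.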
-In Sect.~\ref{sec:partiallyintro} the last section we described the semantics of
+In Sect.~\ref{sec:partiallyintro} we described the semantics of
a command as a
-function from status to status. We also suggested that the formulae in a
+function from status to status. We also hinted that the formulae in a
command are encoded as partially specified terms. However, consider the
command ``\texttt{replace} $x$ \texttt{with} $y^2$''. Until the occurrence
of $x$ to be replaced is located, its context is unknown. Since $y^2$ must
The elegant solution we have implemented consists in representing terms
in a command as functions from a context to a partially refined term. The
function is obtained by partially applying our disambiguation function to
-the content term to be disambiguated. Our solution should be compared with
+the content level term to be disambiguated. Our solution should be compared with
the one adopted in the \COQ{} system, where ambiguity is only relative to
De Bruijn indexes.
-In \COQ{} variables can be bound either by name or by position. A term
+In \COQ, variables can be bound either by name or by position. A term
occurring in a command has all its variables bound by name to avoid the need of
-a context during disambiguation. Moreover, this makes more complex every
+a context during disambiguation. This makes every
operation over terms (i.e. according to our architecture every module that
depends on \texttt{cic}) more complex, since the code must deal consistently
with both kinds
-of binding. Also, this solution cannot cope with other forms of ambiguity (as
-the context dependent meaning of the exponent in the previous example).
+of binding. Moreover, this solution cannot cope with other forms of ambiguity
+(as the context dependent meaning of the exponent in the previous example).
\subsection{Presentation level terms}
\label{sec:presentationintro}
level terms. \GDOME{} \MATHML+\BOXML{} trees can be rendered by the
\GTKMATHVIEW{}
widget developed by Luca Padovani~\cite{padovani}. The widget is
-particularly interesting since it allows to implement \emph{semantic
+particularly interesting since it allows the implementation of \emph{semantic
selection}.
Semantic selection is a technique that consists in enriching the presentation
Once the rendering of a lower level term is
selected it is possible for the application to retrieve the pointer to the
lower level term. An example of application of semantic selection is
-\emph{semantic cut\&paste}: the user can select an expression and paste it
+\emph{semantic copy \& paste}: the user can select an expression and paste it
elsewhere preserving its semantics (i.e. the partially specified term),
possibly performing some semantic transformation over it (e.g. renaming
variables that would be captured or lambda-lifting free variables).
\GETTER{} obtaining a document with fully specified terms. Then it translates
it to the presentation level passing through the content level. Finally
it returns the result document to be rendered by the user's
-browser.\NOTE{\TODO{manca la passata verso HTML}}
+browser.
The \components{} not yet described (\texttt{extlib}, \texttt{xml},
\texttt{logger}, \texttt{registry} and \texttt{utf8\_macros}) are
an \emph{authoring} interface to develop new proofs and theories. According
to its historical origins, \MATITA{} strives to provide innovative
functionalities for the interaction with the library. It is more traditional
-in its script based authoring interface.
-
-In the remaining part of the paper we focus on the user view of \MATITA{}.
-This section is devoted to the aspects of the tool that arise from the
-document centric approach to the library. Sect.~\ref{sec:authoring} describes
-the peculiarities of the authoring interface.
+in its script based authoring interface. In the remaining part of the paper we
+focus on the user view of \MATITA.
The library of \MATITA{} comprises mathematical concepts (theorems,
axioms, definitions) and notation. The concepts are authored sequentially
using scripts that are (ordered) sequences of procedural commands.
-However, once they are produced we store them independently in the library.
-The only relation implicitly kept between the notions are the logical,
+Once they are produced we store them independently in the library.
+The only relations implicitly kept between the concepts are the logical,
acyclic dependencies among them. This way the library forms a global (and
distributed) hypertext.
\begin{figure}[!ht]
\begin{center}
- \includegraphics[width=0.40\textwidth]{pics/cicbrowser-screenshot-browsing}
+ \includegraphics[width=0.45\textwidth]{pics/cicbrowser-screenshot-browsing}
\hspace{0.05\textwidth}
- \includegraphics[width=0.40\textwidth]{pics/cicbrowser-screenshot-query}
+ \includegraphics[width=0.45\textwidth]{pics/cicbrowser-screenshot-query}
\caption{Browsing and searching the library\strut}
\label{fig:cicbrowser1}
\end{center}
language rendering of proofs can be inspected
(Fig.~\ref{fig:cicbrowser2}), and content based searches on the
library can be performed (on the right of Fig.~\ref{fig:cicbrowser1}).
-Available content based searches are described in
+Content based searches are described in
Sect.~\ref{sec:indexing}. Other examples of library operations are
disambiguation of content level terms (see
Sect.~\ref{sec:disambiguation}) and automatic proof searching (see
Scripts are not seen as constituents of the library. They are not published
and indexed, so they cannot be searched or browsed using \HELM{} tools.
However, they play a central role for the maintenance of the library.
-Indeed, once a notion is invalidated, the only way to restore it is to
+Indeed, once a concept is invalidated, the only way to restore it is to
fix the possibly broken script that used to generate it.
Moreover, during the authoring phase, scripts are a natural way to
-group notions together. They also constitute a less fine grained clustering
-of notions for invalidation.
+group concepts together. They also constitute a less fine grained clustering
+of concepts for invalidation.
In the rest of this section we present in more detail the functionalities of
\MATITA{} related to library management and exploitation.
in the so called \WHELP{} search engine --- have been
extensively described in~\cite{whelp}. Let us just recall here that
the \WHELP{} metadata model is essentially based on a single ternary relation
-\REF{p}{s}{t} stating that an object $s$ refers an object $t$ at a
- given position $p$, where the position specify the place of the
+\REF{p}{s}{t} stating that a concept $s$ refers to a concept $t$ at a
+given position $p$, where the position specifies the place of the
occurrence of $t$ inside $s$ (we currently work with a fixed set of
positions, discriminating the hypothesis from the conclusion and
outermost from innermost occurrences). This approach is extremely
the search features. Here, we shall just recall some of its most
direct applications.
-A first, very simple but not negligeable feature is the check for duplicates.
+A first, very simple but non-negligible feature is the \emph{duplicate check}.
As soon as a theorem is stated, just before starting its proof,
the library is searched
to check that no other equivalent statement has been already proved
of recalling names, the naming discipline remains one of the most
annoying aspects of formal developments, and \HINT{} provides
a very friendly solution.
-In the near feature, we expect to extend the \HINT{} operation to
+
+In the near future, we expect to extend the \HINT{} query to
a \REWRITEHINT, resulting in all equational statements that
can be applied to rewrite the current goal.
translated (in multiple steps) to partially specified terms as sketched in
Sect.~\ref{sec:contentintro}.
-The key component of the translation is the generic disambiguation algorithm
+The key ingredient of the translation is the generic disambiguation algorithm
implemented in the \texttt{disambiguation} component of Fig.~\ref{fig:libraries}
-and presented in~\cite{disambiguation}. In this section we present how to use
+and presented in~\cite{disambiguation}. In this section we detail how to use
that algorithm in the context of the development of a library of formalized
mathematics. We will see that using multiple passes of the algorithm, varying
some of its parameters, helps in keeping the input terse without sacrificing
\subsubsection{Disambiguation aliases}
\label{sec:disambaliases}
-Consider the following command to state a theorem over integer numbers:
+Consider the following command that states a theorem over integer numbers:
\begin{grafite}
theorem Zlt_compat:
posing the same question in case of a future re-execution (e.g. undo/redo),
the choice must be recorded. Since scripts need to be re-executed after
invalidation, the choice record must be permanently stored somewhere. The most
-natural place is in the script itself.
+natural place is the script itself.
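+For instance, after the user has chosen an interpretation for an
+ambiguous identifier, \MATITA{} may record the choice by appending to
+the script a declaration similar to the following (the identifier and
+the URI are purely illustrative):
+\begin{grafite}
+alias id "lt" = "cic:/matita/Z/orders/lt.con".
+\end{grafite}
+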
In \MATITA{} disambiguation is governed by \emph{disambiguation aliases}.
They are mappings, stored in the library, from ambiguity sources
preferences.
Several disambiguation parameters can vary among passes. With respect to
-preference handling we implemented three passes. In the first pass, called
+preference handling we implemented 3 passes. In the first pass, called
\emph{mono-preferences}, we consider only the aliases corresponding to the
-current preferences. In the second pass, called \emph{multi-preferences}, we
+current set of preferences. In the second pass, called
+\emph{multi-preferences}, we
consider every alias corresponding to a current or past preference. For
instance, in the example above disambiguation succeeds in the multi-preference
pass. In the third pass, called \emph{library-preferences}, all aliases
used in order to obtain a refinable partially specified term.
To address this issue, we have the ability to consider each instance of a single
-symbol as a different ambiguous expression in the content level term, and thus
-we can use a different alias for each of them. Exploiting or not this feature is
+symbol as a different ambiguous expression in the content level term,
+enabling the use of a different alias for each of them.
+Whether or not to exploit this feature is
one of the disambiguation pass parameters. A disambiguation pass which exploits
it is said to be using \emph{fresh instances} (as opposed to a \emph{shared
instances} pass).
Fresh instances lead to a non-negligible performance loss (since the choice of
an alias for one instance does not constrain the choice of the others). For
this reason we always attempt a fresh instances pass only after attempting a
-non-fresh one.
+shared instances pass.
-\paragraph{One-shot preferences} Disambiguation preferecens as seen so far are
+\paragraph{One-shot preferences} Disambiguation preferences as seen so far are
instance-independent. However, implicit preferences obtained as a result of a
disambiguation pass which uses fresh instances ought to be instance-dependent.
Informally, the set of preferences that can be respected by the disambiguator on
specified term having type: \texttt{R \TEXMACRO{to} nat \TEXMACRO{to} R}. In
order to disambiguate \texttt{power\_deriv}, the occurrence of \texttt{n} on the
right hand side of the equality needs to be ``injected'' from \texttt{nat} to
-\texttt{R}. The refiner of \MATITA{} supports \emph{implicit coercions} for
+\texttt{R}. The refiner of \MATITA{} supports
+\emph{implicit coercions}~\cite{barthe95implicit} for
this reason: given as input the above presentation level term, it will return a
partially specified term where in place of \texttt{n} the application of a
coercion from \texttt{nat} to \texttt{R} appears (assuming such a coercion has
been defined in advance).
-Coercions are not always desirable. For example, in disambiguating
+Implicit coercions are not always desirable. For example, in disambiguating
\texttt{\TEXMACRO{forall} n: nat. n < n + 1} we do not want the term which uses
-two coercions from \texttt{nat} to \texttt{R} around \OP{<} arguments to show up
+2 coercions from \texttt{nat} to \texttt{R} around \OP{<} arguments to show up
among the possible partially specified term choices. For this reason we always
attempt a disambiguation pass which requires the refiner not to use coercions
before attempting a coercion-enabled pass.
integers (which indeed does), the
theorem can be disambiguated using twice that coercion on the left hand side of
the implication. The obtained partially specified term, however, would probably
-probably be the expected one, being a theorem which prove a trivial implication.
+not be the expected one, being a theorem which proves a trivial
+implication.
Motivated by this and similar examples we choose to always prefer fresh
instances over implicit coercions, i.e. we always attempt disambiguation
passes with fresh instances
According to the criteria described above, in \MATITA{} we perform the
disambiguation passes depicted in Tab.~\ref{tab:disambpasses}. In
-our experience that choice gives reasonable performance and minimize the need of
-user interaction during the disambiguation.
+our experience that choice gives reasonable performance and minimizes the need
+of user interaction during the disambiguation.
\begin{table}[ht]
\caption{Disambiguation passes sequence\strut}
%store the XML encoding of the objects defined in the script, the
%disambiguation aliases and the interpretation and notational convention defined,
%while the latter is used to store all the metadata needed by
-%\WHELP{}.
+%\WHELP.
%
%While the consistency of the data store in the two media has
%nothing to do with the nature of
\subsubsection{Invalidation}
-Invalidation (see Sect.~\ref{sec:library}) is implemented in two phases.
+Invalidation (see Sect.~\ref{sec:library}) is implemented in 2 phases.
The first one is the calculation of all the concepts that recursively
-depend on the ones we are invalidating. The calculation of the
-reverse dependencies can be computed using the relational database
-that stores metadata.
+depend on the ones we are invalidating. It can be performed
+using the relational database that stores the metadata.
This technique is the same used by the \emph{Dependency Analyzer}
and is described in~\cite{zack-master}.
%the library is preserved.
To regenerate an invalidated part of the library \MATITA{} re-executes
-the script files that produced the invalidated concepts. The main
+the scripts that produced the invalidated concepts. The main
problem is to find a suitable order of execution of the scripts.
For this purpose we provide a tool called \MATITADEP{}
that takes in input the list of scripts that compose the development and
-outputs their dependencies in a format suitable for the GNU \texttt{make} tool.
+outputs their dependencies in a format suitable for the GNU \texttt{make}
+tool.\footnote{\url{http://www.gnu.org/software/make/}}
The user is not asked to run \MATITADEP{} by hand, but
simply to tell \MATITA{} the root directory of his development (where all
script files can be found) and \MATITA{} will handle all the generation
related tasks, including dependencies calculation.
To compute dependencies it is enough to look at the script files for
-disambiguation preferences declared or imported from other scripts
-(see \ref{sec:disambaliases}).
+literals of inclusion and explicit disambiguation preferences
+(see Sect.~\ref{sec:disambaliases}).
+\TODO{to be revised: where does ``regenerating content'' come from?}
Regenerating the content of a modified script file involves the preliminary
invalidation of all its old content.
Only the former is intended to be used directly by the
user; the latter is automatically invoked by \MATITA{}
-to try to regenerate parts of the library previously invalidated.
+to regenerate parts of the library previously invalidated.
+\TODO{as above: ``content of a script''?}
While they share the same engine for generation and invalidation, they
-provide different granularity. \MATITAC{} is only able to reexecute a
+provide different granularity. \MATITAC{} is only able to re-execute a
whole script and similarly to invalidate the whole content of a script
-(together with all the other scripts that rely on an concept defined
+(together with all the other scripts that rely on a concept defined
in it).
\subsection{Automation}
\label{sec:automation}
+In the long run, one would expect to work with a proof assistant
+like \MATITA{} using only three basic tactics: \texttt{intro},
+\texttt{elim}, and \texttt{auto} (possibly complemented by a moderate
+use of \texttt{cut}). The state of the art in automated deduction is
+still far away from this goal, but this is one of the main development
+directions of \MATITA.
+
+Even in this field, the underlying philosophy of \MATITA{} is to
+free the user from any burden related to the overall management
+of the library. For instance, in \COQ{} the user is responsible for
+defining small collections of theorems to be used as parameters
+of the \texttt{auto} tactic;
+in \MATITA{}, it is the system itself that automatically retrieves, from
+the whole library, a subset of theorems worth considering
+according to the signature of the current goal and context.
+
+The basic tactic merely performs an iterated use of the \texttt{apply}
+tactic (with no \texttt{intro}). The search tree may be pruned according
+to two main parameters: the \emph{depth} (with the obvious meaning) and
+the \emph{width}, that is the maximum number of (new) open goals allowed
+at any instant. \MATITA{} has only one notion of metavariable,
+corresponding to the so-called existential variables of \COQ; thus,
+\MATITA{}'s \texttt{auto} tactic should be compared with \COQ{}'s
+\texttt{eauto} tactic.
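+
+As a purely indicative sketch (the concrete syntax of the parameters is
+an assumption here), the search space can be bounded by providing both
+parameters explicitly:
+\begin{grafite}
+auto depth = 4 width = 3.
+\end{grafite}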
+
+Recently we have extended automation with paramodulation-based
+techniques. At present, the system works reasonably well with
+equational rewriting, where the notion of equality is parametric
+and can be specified by the user: the system only requires
+a proof of \emph{reflexivity} and \emph{paramodulation} (or rewriting,
+as it is usually called in the proof assistant community).
+
+Given an equational goal, \MATITA{} retrieves all known equational facts
+from the library (and the local context), applying a variant of
+the so-called \emph{given-clause algorithm}~\cite{paramodulation},
+that is the procedure currently used by the majority of modern theorem
+provers.
+
+The given-clause algorithm essentially alternates a \emph{saturation}
+phase, deriving new facts from a set of active facts and a new
+\emph{given} clause suitably selected from a set of passive equations,
+with a \emph{demodulation} phase that tries to simplify the equations,
+orienting them according to a suitable weight associated with terms.
+\MATITA{} currently supports several different weighting functions,
+comprising the Knuth-Bendix ordering (KBO) and the recursive path
+ordering (RPO), the latter integrating particularly well with
+normalization.
+
+Demodulation alone is already quite a powerful technique, and
+it has been turned into a tactic by itself: the \texttt{demodulate}
+tactic, which can be seen as a kind of generalization of
+\texttt{simplify}. The following portion of script shows two
+interesting applications of this tactic (both of them relying
+on elementary arithmetic equations):
+
+\begin{grafite}
+theorem example1:
+  \forall x: nat. (x+1)*(x-1) = x*x - 1.
+intro.
+apply (nat_case x)
+[ simplify; reflexivity
+| intro; demodulate; reflexivity ]
+qed.
+
+theorem example2:
+  \forall x, y: nat. (x+y)*(x+y) = x*x + 2*x*y + y*y.
+intros; demodulate; reflexivity.
+qed.
+\end{grafite}
+
+In the future we expect to integrate applicative and equational
+rewriting. In particular, the overall idea would be to integrate
+applicative rewriting with demodulation, treating saturation as an
+operation to be performed in batch mode, e.g. during the night.
+
-\TODO{sezione sull'automazione}
\subsection{Naming convention}
\label{sec:naming}
A minor but not entirely negligible aspect of \MATITA{} is that of
-adopting a (semi)-rigid naming convention for identifiers, derived by
+adopting a (semi)-rigid naming convention for concept names, derived from
our studies about metadata for statements.
-The convention is only applied to identifiers for theorems
-(not definitions), and relates the name of a proof to its statement.
+The convention is only applied to theorems
+(not definitions), and relates theorem names to their statements.
The basic rules are the following:
\begin{itemize}
-\item each identifier is composed by an ordered list of (short)
-names occurring in a left to right traversal of the statement;
-\item all identifiers should (but this is not strictly compulsory)
-separated by an underscore,
-\item identifiers in two different hypothesis, or in an hypothesis
-and in the conclusion must be separated by the string ``\verb+_to_+'';
-\item the identifier may be followed by a numerical suffix, or a
-single or double apostrophe.
+
+  \item each name is composed of an ordered list of (short)
+ identifiers occurring in a left to right traversal of the statement;
+
+  \item all names should (but this is not strictly compulsory) be
+  separated by an underscore;
+
+ \item names occurring in 2 different hypotheses, or in an hypothesis
+ and in the conclusion must be separated by the string \texttt{\_to\_};
+
+ \item the identifier may be followed by a numerical suffix, or a
+ single or double apostrophe.
\end{itemize}
-Take for instance the theorem
-\[\forall n:nat. n = plus \; n\; O\]
-Possible legal names are: \verb+plus_n_O+, \verb+plus_O+,
-\verb+eq_n_plus_n_O+ and so on.
-Similarly, consider the theorem
-\[\forall n,m:nat. n<m \to n \leq m\]
-In this case \verb+lt_to_le+ is a legal name,
-while \verb+lt_le+ is not.\\
+
+Take for instance the statement:
+\begin{grafite}
+ \forall n: nat. n = plus n O
+\end{grafite}
+Possible legal names are: \texttt{plus\_n\_O}, \texttt{plus\_O},
+\texttt{eq\_n\_plus\_n\_O} and so on.
+
+Similarly, consider the theorem:
+\begin{grafite}
+  \forall n, m: nat. n < m \to n \leq m
+\end{grafite}
+In this case \texttt{lt\_to\_le} is a legal name,
+while \texttt{lt\_le} is not.
+
But what about, say, the symmetric law of equality? Probably you would like
to name such a theorem with something explicitly recalling symmetry.
The correct approach,
in this case, is the following. You should start by defining the
-symmetric property for relations
-
-\[definition\;symmetric\;= \lambda A:Type.\lambda R.\forall x,y:A.R x y \to R y x \]
+symmetric property for relations:
+\begin{grafite}
+definition symmetric =
+ \lambda A: Type. \lambda R. \forall x, y: A.
+ R x y \to R y x
+\end{grafite}
+Then, you may state the symmetry of equality as:
+\begin{grafite}
+\forall A: Type. symmetric A (eq A)
+\end{grafite}
+and \texttt{symmetric\_eq} is a legal name for such a theorem.
-Then, you may state the symmetry of equality as
-\[ \forall A:Type. symmetric \;A\;(eq \; A)\]
-and \verb+symmetric_eq+ is valid \MATITA{} name for such a theorem.
So, somehow unexpectedly, the introduction of a semi-rigid naming convention
has an important beneficial effect on the global organization of the library,
-forcing the user to define abstract notions and properties before
+forcing the user to define abstract concepts and properties before
using them (and formalizing such use).
Two cases receive special treatment. The first one concerns theorems whose
conclusion is a (universally quantified) predicate variable, i.e.
theorems of the shape
-$\forall P,\dots.P(t)$.
-In this case you may replace the conclusion with the word
-``elim'' or ``case''.
-For instance the name \verb+nat_elim2+ is a legal name for the double
+$\forall P,\dots.P(t)$.
+In this case you may replace the conclusion with the string
+\texttt{elim} or \texttt{case}.
+For instance the name \texttt{nat\_elim2} is a legal name for the double
induction principle.
The other special case is that of statements whose conclusion is a
match expression.
-A typical example is the following
-\begin{verbatim}
- \forall n,m:nat.
- match (eqb n m) with
- [ true \Rightarrow n = m
- | false \Rightarrow n \neq m]
-\end{verbatim}
-where $eqb$ is boolean equality.
+A typical example is the following:
+\begin{grafite}
+\forall n,m:nat.
+ match (eqb n m) with
+ [ true \Rightarrow n = m
+ | false \Rightarrow n \neq m]
+\end{grafite}
+where \texttt{eqb} is boolean equality.
In these cases, the name can be built starting from the matched
-expression and the suffix \verb+_to_Prop+. In the above example,
-\verb+eqb_to_Prop+ is accepted.
+expression and the suffix \texttt{\_to\_Prop}. In the above example,
+\texttt{eqb\_to\_Prop} is accepted.
\section{The authoring interface}
\label{sec:authoring}
The authoring interface of \MATITA{} is very similar to Proof General. We
chose not to build the \MATITA{} UI over Proof General for two reasons. First
of all we wanted to integrate our XML-based rendering technologies, mainly
-\GTKMATHVIEW{}. At the time of writing Proof General supports only text based
+\GTKMATHVIEW. At the time of writing Proof General supports only text based
rendering.\footnote{This may change with the future release of Proof General
based on Eclipse, but is not yet the case.} The second reason is that we wanted
to build the \MATITA{} UI on top of a state-of-the-art and widespread toolkit
-as GTK is.
+such as \GTK.
Fig.~\ref{fig:screenshot} is a screenshot of the \MATITA{} authoring interface,
featuring two windows. The background one is very similar to the Proof General
%theorem valid_name: \forall n,m. m + n = n \to m = O.
% intros (n m H).
%\end{grafite}
-%\noindent
+
Consider the following sequent
\sequent{
n:nat\\
H: m + n = n}{
m=O
}
-\noindent
+
To change the right part of the equality in the $H$
hypothesis with $O + n$, the user selects and pastes it as the pattern
in the following statement.
\begin{grafite}
change in H:(? ? ? %) with (O + n).
\end{grafite}
-\noindent
+
To understand the pattern (or produce it by hand) the user should be
aware that the notation $m+n=n$ hides the term $(eq~nat~(m+n)~n)$, so
that the pattern selects only the third argument of $eq$.
\begin{grafite}
change in H match n with (O + n).
\end{grafite}
-\noindent
+
In this case the $\NT{sequent\_path}$ selects the whole $H$, while
the second phase locates $n$.
\begin{grafite}
change in H:(? ? (? ? %) %) with (O + n).
\end{grafite}
-\noindent
\subsubsection{Tactics supporting patterns}
+\TODO{Thanks to patterns, compared to Coq we have for instance the possibility of performing deep reductions!!!}
+
\TODO{merge with the following one, pointing out that patterns are a
common interface for tactics}
immediately structure the proof or postpone the structuring.
If you decide for the former you have to apply the branching tactical and write
at once tactics for all the cases. Since the user does not even know the
-generated goals yet, she can only replace all the cases with the identity
+generated goals yet, he can only replace all the cases with the identity
tactic and execute the command, just to receive feedback on the first
-goal. Then she has to go one step back to replace the first identity
+goal. Then he has to go one step back to replace the first identity
tactic with the wanted one and repeat the process until all the
branches are closed.
For instance, reconsider the previous example of a proof by induction.
With step-by-step tacticals the user can apply the induction principle, and just
-open the branching tactical ``\texttt{[}''. Then she can interact with the
+open the branching tactical ``\texttt{[}''. Then he can interact with the
system until the proof of the first case is terminated. After that
``\texttt{|}'' is used to move to the next goal, until all goals are
closed. After the last goal, the user closes the branching tactical with
\theendnotes
+\TODO{revise the bibliography, it is a bit poor}
+
+\TODO{add an entry for implicit coercions}
+
\bibliography{matita}
\end{document}