\end{opening}
\tableofcontents
\listoffigures
\section{Introduction}
\label{sec:intro}
\cite{mkm-helm} at the University of Bologna, under the direction of
Prof.~Asperti. \\
The paper describes the overall architecture of
the system, focusing on its most distinctive and innovative
features.
\subsection{Historical Perspective}
interested in developing tools and techniques to enhance the accessibility
via Web of formal libraries of mathematics. Due to its size, the
library of the \COQ~\cite{CoqManual} proof assistant (of the order of 35'000 theorems)
was chosen as a privileged test bench for our work, although experiments
have also been conducted with other systems, notably
with \NUPRL~\cite{nuprl-book}.
The work, mostly performed in the framework of the recently concluded
European project IST-33562 \MOWGLI{}~\cite{pechino}, mainly consisted of the
following steps:
has to be built;
\item developing indexing and searching techniques supporting semantic
queries to the library;
\item developing languages and tools for a high-quality notational
rendering of mathematical information\footnote{We have been
active in the \MATHML{} Working group since 1999.};
\end{itemize}
According to our content-centric commitment, the library exported from
\subsection{The system}
\MATITA{} is a proof assistant (also called interactive theorem prover).
It is based on the Calculus of (Co)Inductive Constructions
(CIC)~\cite{Werner}, a dependently typed lambda-calculus \`a la
Church enriched with primitive inductive and co-inductive data types.
Via the Curry-Howard isomorphism, the calculus can be seen as a very
rich higher-order logic in which proofs can be simply represented and
stored as lambda-terms. \COQ{} and Lego are other systems that adopt
(variations of) CIC as their foundation.
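As a minimal illustration of the proofs-as-lambda-terms reading (an example of ours, not taken from the \MATITA{} library), the polymorphic identity function is at once a program and a proof of the tautology $\forall A.\ A \to A$:
\[
  \lambda A{:}\mathrm{Prop}.\,\lambda a{:}A.\,a
  \;:\; \forall A{:}\mathrm{Prop}.\ A \to A
\]
Storing the term on the left is thus enough to store, and later re-check, the proof of the statement on the right.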
The proof language of \MATITA{} is procedural, in the tradition of the LCF
theorem prover. Coq, NuPRL, PVS, Isabelle are all examples of other systems
primary means of communication of proofs (once rendered to natural language
for human audiences).
The user interfaces now adopted by all the proof assistants based on a
procedural proof language have been inspired by the CtCoq pioneering
system~\cite{ctcoq1}. One successful incarnation of the ideas introduced
by CtCoq is the Proof General generic interface~\cite{proofgeneral},
that has set a sort of
standard way to interact with the system. Several procedural proof assistants
have either adopted or cloned Proof General as their main user interface.
The authoring interface of \MATITA{} is a clone of the Proof General interface.
\begin{itemize}
 \item choice of the foundational system
 \item independence from \COQ{}
 \item compatibility with legacy systems
\end{itemize}
more the effect of the circumstances of its creation described
above than the result of a deliberate design. In particular, we
(essentially) share the same foundational dialect of \COQ{} (the
Calculus of (Co)Inductive Constructions), the same implementation
language (\OCAML{}), and the same (script based) authoring philosophy.
However, the analogy essentially stops here and no code is shared by the
two systems.
idea, although \MATITA{} currently supports almost all functionalities of
\COQ{}, it links 60'000 lines of \OCAML{} code, against the 166'000 lines linked
by \COQ{} (and we are convinced that, starting from scratch again,
we could further reduce the code in a significant way).
Moreover, the complexity of the code of \MATITA{} is greatly reduced with
respect to \COQ. For instance, the API of the components of \MATITA{} comprises
989 functions, to be compared with the 4'286 functions of \COQ.
Finally, \MATITA{} has several innovative features over \COQ{} that derive
from the integration of Mathematical Knowledge Management tools with proof
assistants. Among them, the advanced indexing tools over the library and
the parser for ambiguous mathematical notation.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.9\textwidth,height=0.8\textheight]{libraries-clusters}
 \caption[\MATITA{} components and related applications]{\MATITA{}
  components and related applications, with thousands of lines of
  code (klocs)}
\label{fig:libraries}
\end{center}
\end{figure}
Fig.~\ref{fig:libraries} shows the architecture of the \emph{\components}
(circle nodes) and \emph{applications} (squared nodes) developed in the HELM
project. Each node is annotated with the number of lines of source code
(including comments).
Applications and \components{} depend on other \components{}, forming a
directed acyclic graph (DAG). Each \component{} can be decomposed in
For those \components{} whose functionalities are also provided by the
aforementioned Web services, it is also possible to link stub code that
forwards the request to a remote Web service. For instance, the Getter
is just a wrapper to the \GETTER{} \component{} that allows the
\component{} to be used as a Web service. \MATITA{} can directly link the code
of the \GETTER{} \component, or it can use a stub library with the same
API that forwards every request to the Getter.
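The linking choice just described can be sketched in a few lines. The following is an illustrative Python sketch of ours (not the actual \OCAML{} API; names such as \texttt{LocalGetter} and \texttt{GetterStub} are invented for the example): client code is written against a single resolution interface, which is satisfied either by the locally linked \component{} or by a stub that forwards every request to the remote Web service.

```python
# Illustrative sketch only: not the actual OCaml API of the Getter.
# Client code depends on one interface; either the locally linked
# component or a stub forwarding to the Web service can satisfy it.

class LocalGetter:
    """Directly linked component: resolves URIs from a local table."""
    def __init__(self, table):
        self.table = table

    def resolve(self, uri):
        return self.table[uri]

class GetterStub:
    """Same interface, but every request is forwarded remotely."""
    def __init__(self, endpoint, fetch):
        self.endpoint = endpoint
        self.fetch = fetch  # injected HTTP client function

    def resolve(self, uri):
        return self.fetch(self.endpoint + "/resolve?uri=" + uri)

def load_notion(getter, uri):
    # Application code is unchanged whichever implementation is linked.
    return getter.resolve(uri)
```

Because both implementations expose the same \texttt{resolve} entry point, the decision between direct linking and remote forwarding is made at link time, not in application code.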
To better understand the architecture of \MATITA{} and the role of each
content level terms; presentation level terms.
\subsection{Fully specified terms}
\label{sec:fullyspec}

\emph{Fully specified terms} are CIC terms where no information is
missing or left implicit. A fully specified term should be well-typed.
The mathematical notions (axioms, definitions, theorems) that are stored
Terms may reference other mathematical notions in the library.
One commitment of our project is that the library should be physically
 distributed. The \GETTER{} \component{} manages the distribution,
providing a mapping from logical names (URIs) to the physical location
of a notion (a URL). The \texttt{urimanager} \component{} provides the URI
data type and several utility functions over URIs. The
in Sect.~\ref{sec:libmanagement}.
\subsection{Partially specified terms}
\label{sec:partspec}

\emph{Partially specified terms} are CIC terms where subterms can be omitted.
Omitted subterms can bear no information at all or they may be associated with
a sequent. The former are called \emph{implicit terms} and they occur only
interpretations. An interpretation is correct if the partially specified term
obtained using the interpretation is refinable.
In Sect.~\ref{sec:partspec} we described the semantics of a command as a
function from status to status. We also suggested that the formulae in a
command are encoded as partially specified terms. However, consider the
command ``\texttt{replace} $x$ \texttt{with} $y^2$''. Until the occurrence
context of the occurrence $x$ it must replace.
The elegant solution we have implemented consists in representing terms
in a command as functions from a context to a partially refined term. The
function is obtained by partially applying our disambiguation function to
the content term to be disambiguated. Our solution should be compared with
the one adopted in the Coq system, where ambiguity is only relative to De Bruijn
indexes. In Coq variables can be bound either by name or by position. A term
occurring in a command has all its variables bound by name to avoid the need of
a context during disambiguation. Moreover, this makes every operation over
terms more complex (i.e.\ according to our architecture every module that
depends on \texttt{cic}), since the code must deal consistently with both kinds
of binding. Also, this solution cannot cope with other forms of ambiguity (such
as the context-dependent meaning of the exponent in the previous example).
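The representation described above can be sketched as follows (an illustrative Python sketch of ours, not the actual \OCAML{} code; the toy \texttt{disambiguate} function is invented for the example): the term carried by a command is kept as a function awaiting a context, obtained by partially applying the disambiguation function to the content-level term.

```python
from functools import partial

# Illustrative sketch only: a term occurring in a command is stored as
# a function from a context to a (partially refined) term, obtained by
# partially applying the disambiguation function to the content term.

def disambiguate(content_term, context):
    """Toy disambiguation: resolve each token against the context."""
    return [context.get(tok, tok) for tok in content_term.split()]

def term_of_command(content_term):
    # The context is supplied later, once the occurrence the command
    # acts on (and hence its context) is known.
    return partial(disambiguate, content_term)

pending = term_of_command("y ^ 2")
# Only at application time does the tactic provide the context:
pending({"y": "y:nat", "^": "nat_pow"})  # returns ['y:nat', 'nat_pow', '2']
```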
\subsection{Presentation level terms}
against the \texttt{grafite\_parser} \components{}
since they provide an interface to the user. In both cases the formulae
written by the user are parsed using the \texttt{content\_pres} \component{} and
then disambiguated using the \texttt{cic\_disambiguation} \component. However,
only \MATITA{} is linked against the \texttt{grafite\_engine} and
\texttt{tactics} components (summing up to a total of 11'200 lines of code)
since \WHELP{} can only execute those ASTs that correspond to queries
(implemented in the \texttt{whelp} component).
The \UWOBO{} Web service wraps the \texttt{content\_pres} \component,
providing a rendering service for the documents in the distributed library.
\section{The interface to the library}
\label{sec:library}
A proof assistant provides both an interface to interact with its library and
an \emph{authoring} interface to develop new proofs and theories. According
acyclic dependencies among them. This way the library forms a global (and
distributed) hypertext. Several useful operations can be implemented on the
library only, regardless of the scripts. Examples of such operations
implemented in \MATITA{} are: searching and browsing (see Sect.~\ref{sec:indexing});
disambiguation of content level terms (see Sect.~\ref{sec:disambiguation});
automatic proof searching (see Sect.~\ref{sec:automation}).
The key requisite for the previous operations is that the library must
be fully accessible and in a logically consistent state. To preserve
consistency, a concept cannot be altered or removed unless the part of the
library that depends on it is modified accordingly. To allow incremental
database.
For non-collaborative development, full versioning can be avoided, but
invalidation is still required. Since nobody else is relying on the
user's development, the user is free to change and invalidate part of the library
without branching. Invalidation is still necessary to avoid using a
concept that is no longer valid.
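The invalidation step can be sketched as a reachability computation over the dependency graph. The following is a simplified Python sketch of ours (ignoring versioning and storage; the concept names in the example are invented): whatever transitively depends on the altered concept is marked invalid.

```python
# Simplified sketch: invalidation as reverse reachability over the
# dependency DAG. depends_on maps a concept to the concepts it uses.

def invalidate(changed, depends_on):
    # Build the reverse (dependents) relation once.
    dependents = {}
    for concept, uses in depends_on.items():
        for used in uses:
            dependents.setdefault(used, set()).add(concept)
    # Walk reverse edges starting from the changed concept.
    invalid, frontier = set(), [changed]
    while frontier:
        concept = frontier.pop()
        for dep in dependents.get(concept, ()):
            if dep not in invalid:
                invalid.add(dep)
                frontier.append(dep)
    return invalid

deps = {"plus_comm": {"plus"}, "le_plus": {"plus", "le"},
        "le_refl": {"le"}}
invalidate("plus", deps)  # returns {'plus_comm', 'le_plus'}
```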
So far, in \MATITA{} we address only this non-collaborative scenario
group notions together. They also constitute a less fine grained clustering
of notions for invalidation.
In the rest of this section we present in more detail the functionalities of
\MATITA{} related to library management and exploitation.
Sect.~\ref{sec:authoring} is devoted to the description of the peculiarities of
the \MATITA{} authoring interface.
\subsection{Indexing and searching}
\label{sec:indexing}
\subsubsection{Disambiguation aliases}
\label{sec:disambaliases}
Let us start with the definition of the ``strictly greater than'' notion over
(Peano) natural numbers.
\begin{grafite}
The \texttt{include} statement adds the requirement that the part of the library
defining the notion of natural numbers should be defined before
processing what follows. Note indeed that the algorithm presented
in~\cite{disambiguation} does not describe where interpretations for ambiguous
expressions come from, since it is application-specific. As a first
approximation, we will assume that in \MATITA{} they come from the library (i.e.
the alias for \OP{<} points to the integer version of the operator, no
refinable partially specified term matching the term could be found.
For this reason we chose to attempt \emph{multiple disambiguation passes}. A
first pass attempts to disambiguate using the last available disambiguation
aliases (\emph{mono aliases} pass); in case of failure the next pass tries
disambiguation again, forgetting the aliases and using the whole library to
retrieve interpretations for ambiguous expressions (\emph{library aliases} pass).
Since the latter pass may lead to too many choices, we interposed an additional
pass between the two which uses as interpretations all the aliases coming from
\subsubsection{Operator instances}
Let us suppose now we want to state a theorem relating ordering relations on
natural and integer numbers. The way we would like to write such a theorem (as
we can read it in the \MATITA{} standard library) is:
an \emph{i}-th instance of a symbol should be mapped to a given partially
specified term. Instance-dependent aliases are meaningful only for the term whose
disambiguation generated them. For this reason we call them \emph{one-shot
aliases} and \MATITA{} does not use them to disambiguate further terms down in the
script.
\subsubsection{Implicit coercions}
Let us now consider a theorem about derivation:
\begin{grafite}
theorem power_deriv:
and suppose there exists a \texttt{R \TEXMACRO{to} nat \TEXMACRO{to} R}
interpretation for \OP{\^}, and a real number interpretation for \OP{*}.
Mathematicians would write the term that way since it is well known that the
natural number \texttt{n} could be ``injected'' in \IR{} and considered a real
number for the purpose of real multiplication. The refiner of \MATITA{} supports
\emph{implicit coercions} for this reason: given as input the above content
(assuming it has been defined as such of course).
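The behaviour of the refiner on such terms can be sketched as follows (a toy Python sketch of ours; the coercion table and names such as \texttt{nat\_to\_R} are illustrative assumptions, not the actual \MATITA{} code): when the inferred type of an argument differs from the expected one, a registered coercion is inserted around the argument.

```python
# Illustrative sketch only: insertion of implicit coercions during
# refinement. The coercion table below is an assumption for the example.
coercions = {("nat", "R"): "nat_to_R"}

def refine_arg(fun, expected_ty, arg, actual_ty):
    if actual_ty == expected_ty:
        return "(%s %s)" % (fun, arg)
    c = coercions.get((actual_ty, expected_ty))
    if c is None:
        raise TypeError("no coercion from %s to %s"
                        % (actual_ty, expected_ty))
    # Wrap the argument with the coercion, as the refiner would.
    return "(%s (%s %s))" % (fun, c, arg)

refine_arg("Rmult", "R", "x", "R")    # returns (Rmult x): no coercion
refine_arg("Rmult", "R", "n", "nat")  # returns (Rmult (nat_to_R n))
```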
Nonetheless coercions are not always desirable. For example, in disambiguating
\texttt{\TEXMACRO{forall} x: nat. n < n + 1} we do not want the term which uses
two coercions from \texttt{nat} to \texttt{R} around \OP{<} arguments to show up
among the possible partially specified term choices. For this reason in
\MATITA{} we always try first a disambiguation pass which requires the refiner
\subsubsection{Disambiguation passes}
According to the criteria described above in \MATITA{} we choose to perform the
sequence of disambiguation passes depicted in Tab.~\ref{tab:disambpasses}. In
our experience that choice gives reasonable performance and minimizes the need
for user interaction during disambiguation.
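The pass sequence can be sketched as a simple fallback loop (an illustrative Python sketch of ours; the real passes also vary the use of coercions and aliases in ways the sketch omits): each pass is tried in order, from the most restrictive aliases to the whole library, and the first refinable result wins.

```python
# Illustrative sketch only: multiple disambiguation passes tried in
# order; each pass returns the refinable interpretations it found
# (an empty list meaning failure).

def disambiguate_multipass(term, passes):
    for attempt in passes:
        results = attempt(term)
        if results:
            return results
    raise ValueError("no refinable interpretation for %r" % term)

# Hypothetical passes: the mono-aliases pass fails (e.g. '<' is still
# bound to the integer ordering), the library-wide pass succeeds.
mono_pass = lambda term: []
library_pass = lambda term: ["nat_lt"]
disambiguate_multipass("O < S n", [mono_pass, library_pass])
# returns ['nat_lt']
```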
\begin{table}[ht]
\caption{Sequence of disambiguation passes used in \MATITA.\strut}
\subsection{Generation and Invalidation}
\label{sec:libmanagement}
The aim of this section is to describe the way \MATITA{}
preserves the consistency and the availability of the library
using the \WHELP{} technology, in response to the user's alteration or
removal of mathematical objects.
As already sketched in Sect.~\ref{sec:fullyspec}, what we generate
from a script is split between two storage media, a
classical filesystem and a relational database. The former is used to
store the XML encoding of the objects defined in the script, the
disambiguation aliases and the interpretation and notational convention defined,
\subsubsection{Batch vs Interactive}
\MATITA{} includes an interactive authoring interface and a batch
``compiler'' (\MATITAC). Only the former is intended to be used directly by the
user, the latter is automatically invoked when a
part of the user development is required (for example issuing an
\texttt{include} command) but not yet compiled.
\section{The authoring interface}
\label{sec:authoring}
The authoring interface of \MATITA{} is very similar to Proof General. We
chose not to build the \MATITA{} UI over Proof General for two reasons. First
of all we wanted to integrate our XML-based rendering technologies, mainly
\GTKMATHVIEW{}. At the time of writing Proof General supports only text-based
rendering.\footnote{This may change with the future release of Proof General
based on Eclipse, but is not yet the case.} The second reason is that we wanted
to build the \MATITA{} UI on top of a state-of-the-art and widespread toolkit
such as GTK.

Fig.~\ref{fig:screenshot} is a screenshot of the \MATITA{} authoring interface,
featuring two windows. The background one is very similar to the Proof General
interface. The main difference is that we use the \GTKMATHVIEW{} widget to
render sequents. Since \GTKMATHVIEW{} renders \MATHML{} markup we take
advantage of the whole bidimensional mathematical notation.

The foreground window, also implemented around \GTKMATHVIEW, is called
``cicBrowser''. It is used to browse the library, including the proof being
developed, and enables content-based search over it. Proofs are rendered in
natural language, automatically generated from the low-level lambda-terms,
using techniques inspired by \cite{natural,YANNTHESIS} and already described
in~\cite{remathematization}.

Note that the syntax used in the script view is \TeX-like; however, Unicode is
fully supported so that mathematical glyphs can be input as such.

\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.95\textwidth]{matita-screenshot}
 \caption{\MATITA{} look and feel}
 \label{fig:screenshot}
\end{center}
\end{figure}
Since the concepts of script based proof authoring are well-known, the
remaining part of this section is dedicated to the distinguishing
features of the \MATITA{} authoring interface.
\subsection{Direct manipulation of terms}
While terms are input as \TeX-like formulae in \MATITA, they are converted to a
mixed \MATHML+\BOXML{} markup for output purposes and then rendered by
\GTKMATHVIEW. As described in~\cite{latexmathml} this mixed choice enables both
high-quality bidimensional rendering of terms (including the use of fancy
layout schemata like radicals and matrices) and the use of a
concise and widespread textual syntax.
By keeping pointers from the presentation level terms down to the
partially specified ones, \MATITA{} enables direct manipulation of
rendered (sub)terms in the form of hyperlinks and semantic selection.

\emph{Hyperlinks} have anchors on the occurrences of constant and
inductive type constructors and point to the corresponding definitions
in the library. Anchors are available notwithstanding the use of
encoding complex notations retain all the hyperlinks of constants or
constructors used in the notation.
\emph{Semantic selection} enables the selection of mixed
\MATHML+\BOXML{} markup, constraining the selection to markup
representing meaningful CIC (sub)terms. In the example on the left of
Fig.~\ref{fig:directmanip} it is thus possible to select the subterm
$\mathrm{prime}~n$, whereas it would not be possible to select
$\to n$, since the former denotes an application while the
latter is not a subterm. Once a meaningful (sub)term has been
selected, actions can be done on it like reductions or tactic
applications.
\begin{figure}[t]
  \begin{center}
    \includegraphics[width=0.40\textwidth]{matita-screenshot-selection}
    \hspace{0.05\textwidth}
    \raisebox{0.4cm}{\includegraphics[width=0.50\textwidth]{matita-screenshot-href}}
    \caption{Semantic selection and hyperlinks}
    \label{fig:directmanip}
  \end{center}
\end{figure}

\subsection{Patterns}
In several situations working with direct manipulation of terms is
simpler and faster than typing the corresponding textual
commands~\cite{proof-by-pointing}.
Nonetheless we need to record actions and selections in scripts.
In \MATITA{} \emph{patterns} are textual representations of selections.
Users can select using the GUI and then ask the system to paste the
corresponding pattern in the script, but more often this process is
transparent: once an action is performed on a selection, the corresponding
textual command is computed and inserted in the script.
\subsubsection{Pattern syntax}
$\%$. \NT{wanted} is a term that lives in the context of the
placeholders.
Textual patterns produced from a graphical selection are made of the
\NT{sequent\_path} only. Such patterns can represent every selection,
but are quite verbose. The \NT{wanted} part of the syntax is meant to
help the users in writing concise and elegant patterns by hand.
\begin{table}
\caption{\label{tab:pathsyn} Patterns concrete syntax.\strut}
the source of an arrow and the head of the application that is
found in the arrow target.
  The first phase not only selects terms (roots of subterms) but also
  determines their context, which will eventually be used in the
  second phase.
\item[Phase 2]
plays a role only if the $[~\verb+match+~\NT{wanted}~]$
part is specified. From the first phase we have some terms, that we
will see as subterm roots, and their context. For each of these
contexts the $\NT{wanted}$ term is disambiguated in it and the
  corresponding root is searched for a subterm that can be unified to
$\NT{wanted}$. The result of this search is the selection the
pattern represents.
\end{description}
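The two phases can be sketched on a toy term representation (an illustrative Python sketch of ours, not the actual \MATITA{} code; terms are nested tuples \texttt{(head, arg1, ..., argN)} rather than real CIC terms):

```python
# Illustrative sketch of the two phases described above, on terms
# represented as nested tuples (head, arg1, ..., argN).

def phase1(term, pattern, selected):
    """Walk term and pattern in parallel; '%' selects the subterm,
    '?' discharges it, a tuple recurses into the arguments."""
    if pattern == "%":
        selected.append(term)
    elif isinstance(pattern, tuple):
        for t, p in zip(term, pattern):
            phase1(t, p, selected)
    # '?' (or any other atom): nothing selected here
    return selected

def phase2(roots, wanted):
    """Search each selected root for occurrences of the wanted term."""
    found = []
    def walk(t):
        if t == wanted:
            found.append(t)
        elif isinstance(t, tuple):
            for s in t:
                walk(s)
    for r in roots:
        walk(r)
    return found

# H : (eq nat (plus m n) n); pattern (? ? ? %) selects the last argument.
H = ("eq", "nat", ("plus", "m", "n"), "n")
print(phase1(H, ("?", "?", "?", "%"), []))  # → ['n']
```

In the sketch, unification in phase 2 is simplified to plain equality; the real system matches up to the refinement relation described above.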
\subsubsection{Examples}
Consider the following sequent:
\sequent{
n:nat\\
m:nat\\
m=O
}
\noindent
To change the right part of the equivalence of the $H$
hypothesis with $O + n$, the user selects it and pastes the corresponding
pattern in the following statement.
\begin{grafite}
change in H:(? ? ? %) with (O + n).
\end{grafite}
\noindent
To understand the pattern (or produce it by hand) the user should be
aware that the notation $m+n=n$ hides the term $(eq~nat~(m+n)~n)$, so
that the pattern selects only the third argument of $eq$.

The experienced user may also write by hand a concise pattern
to change at once all the occurrences of $n$ in the hypothesis $H$:
\begin{grafite}
change in H match n with (O + n).
\end{grafite}
\noindent
In this case the $\NT{sequent\_path}$ selects the whole $H$, while
the second phase locates $n$.
The latter pattern is equivalent to the following one, that the system
can automatically generate from the selection.
\begin{grafite}
change in H:(? ? (? ? %) %) with (O + n).
\end{grafite}
\noindent
\subsubsection{Tactics supporting patterns}
In \MATITA{} all the tactics that can be restricted to a subterm of the working
sequent accept the pattern syntax. In particular these tactics are: simplify,
change, fold, unfold, generalize, replace and rewrite.
con una pattern\_of(select(pattern))}
\subsubsection{Comparison with \COQ{}}
\COQ{} has two different ways of restricting the application of tactics to
subterms of the sequent, both relying on the same special syntax to identify
a term occurrence.
The first way is to use this special syntax to tell the
tactic what occurrences of a wanted term should be affected.
The second is to prepare the sequent with another tactic called
pattern and then apply the real tactic. Note that the choice is not
left to the user, since some tactics need the sequent to be prepared
with pattern and do not accept this special syntax directly.
\end{grafite}
\noindent
meaning that in the hypothesis $H$ the $n$ we want to change is the
second we encounter proceeding from left to right.
The tactic pattern computes a
$\beta$-expansion of a part of the sequent with respect to some
is executed in a single step, while it obviously performs a lot of proof
steps. In the first example of the previous section the whole branch
over the two goals (respectively the left and right part of the logic
and) results in a single step of execution. The workaround does not work
anymore unless you de-structure the proof on the fly, putting some
``\texttt{.}'' where you want the system to stop.\\
\end{description}
\section{Standard library}
\label{sec:stdlib}
\MATITA{} is \COQ{} compatible, in the sense that every theorem of \COQ{}
can be read, checked and referenced in further developments.