X-Git-Url: http://matita.cs.unibo.it/gitweb/?a=blobdiff_plain;f=helm%2Fpapers%2Fmatita%2Fmatita.tex;h=205333113b9538144de215e338573795b7aec49a;hb=4940c811f19eaafe95a0294358ef2445a4f19bcb;hp=d0817571912d4084148560dd46b05be2dde1804b;hpb=52f3181ce6b17977eaf30007ad44bcaa25db4519;p=helm.git diff --git a/helm/papers/matita/matita.tex b/helm/papers/matita/matita.tex index d08175719..205333113 100644 --- a/helm/papers/matita/matita.tex +++ b/helm/papers/matita/matita.tex @@ -1,10 +1,14 @@ \documentclass[a4paper]{llncs} \pagestyle{headings} +\usepackage{color} \usepackage{graphicx} \usepackage{amssymb,amsmath} \usepackage{hyperref} \usepackage{picins} +\usepackage{color} +\usepackage{fancyvrb} +\definecolor{gray}{gray}{0.85} %\newcommand{\logo}[3]{ %\parpic(0cm,0cm)(#2,#3)[l]{\includegraphics[width=#1]{whelp-bw}} %} @@ -17,6 +21,7 @@ \newcommand{\IN}{\ensuremath{\mathbb{N}}} \newcommand{\INSTANCE}{\textsc{Instance}} \newcommand{\IR}{\ensuremath{\mathbb{R}}} +\newcommand{\IZ}{\ensuremath{\mathbb{Z}}} \newcommand{\LIBXSLT}{LibXSLT} \newcommand{\LOCATE}{\textsc{Locate}} \newcommand{\MATCH}{\textsc{Match}} @@ -25,6 +30,7 @@ \newcommand{\MOWGLI}{MoWGLI} \newcommand{\NAT}{\ensuremath{\mathit{nat}}} \newcommand{\NATIND}{\mathit{nat\_ind}} +\newcommand{\NUPRL}{NuPRL} \newcommand{\OCAML}{OCaml} \newcommand{\PROP}{\mathit{Prop}} \newcommand{\REF}[3]{\ensuremath{\mathit{Ref}_{#1}(#2,#3)}} @@ -32,14 +38,42 @@ \newcommand{\UWOBO}{UWOBO} \newcommand{\WHELP}{Whelp} -\newcommand{\ASSIGNEDTO}[1]{\textbf{Assigned to:} #1} +\definecolor{gray}{gray}{0.85} % 1 -> white; 0 -> black +\newcommand{\NT}[1]{\langle\mathit{#1}\rangle} +\newcommand{\URI}[1]{\texttt{#1}} + +%{\end{SaveVerbatim}\setlength{\fboxrule}{.5mm}\setlength{\fboxsep}{2mm}% +\newenvironment{grafite}{\VerbatimEnvironment + \begin{SaveVerbatim}{boxtmp}}% + {\end{SaveVerbatim}\setlength{\fboxsep}{3mm}% + \begin{center} + \fcolorbox{black}{gray}{\BUseVerbatim[boxwidth=0.9\linewidth]{boxtmp}} + \end{center}} -\title{The proof assistant Matita} +\newcommand{\ASSIGNEDTO}[1]{\textbf{Assigned to:} #1} +\newcommand{\FILE}[1]{\texttt{#1}} +\newcommand{\NOTE}[1]{\marginpar{\scriptsize #1}} +\newcommand{\TODO}[1]{\textbf{TODO: #1}} + +\newsavebox{\tmpxyz} +\newcommand{\sequent}[2]{ + \savebox{\tmpxyz}[0.9\linewidth]{ + \begin{minipage}{0.9\linewidth} + \ensuremath{#1} \\ + \rule{3cm}{0.03cm}\\ + \ensuremath{#2} + \end{minipage}}\setlength{\fboxsep}{3mm}% + \begin{center} + \fcolorbox{black}{gray}{\usebox{\tmpxyz}} + \end{center}} + +\title{The Matita proof assistant} \author{Andrea Asperti, Claudio Sacerdoti Coen, Enrico Tassi and Stefano Zacchiroli} \institute{Department of Computer Science, University of Bologna\\ Mura Anteo Zamboni, 7 --- 40127 Bologna, ITALY\\ \email{$\{$asperti,sacerdot,tassi,zacchiro$\}$@cs.unibo.it}} +\bibliographystyle{plain} \begin{document} \maketitle @@ -49,6 +83,133 @@ \section{Introduction} \label{sec:intro} +{\em Matita} is the proof assistant under development by the \HELM{} team +\cite{mkm-helm} at the University of Bologna, under the direction of +Prof.~Asperti. +The origin of the system goes back to 1999. At the time we were mostly +interested to develop tools and techniques to enhance the accessibility +via web of formal libraries of mathematics. Due to its dimension, the +library of the \COQ{} proof assistant (of the order of 35'000 theorems) +was choosed as a privileged test bench for our work, although experiments +have been also conducted with other systems, and notably with \NUPRL{}. 
+
+The work, mostly performed in the framework of the recently concluded
+European project IST-33562 \MOWGLI{}~\cite{pechino}, mainly consisted of the
+following steps:
+\begin{itemize}
+\item exporting the information from the internal representation of
+\COQ{} to a system- and platform-independent format. Since XML was at the
+time an emerging standard, we naturally adopted this technology, fostering
+a content-based architecture for future systems, where the documents
+of the library are the main components around which everything else
+is to be built;
+\item developing indexing and searching techniques supporting semantic
+queries to the library; these efforts gave birth to our \WHELP{}
+search engine, described in~\cite{whelp};
+\item developing languages and tools for a high-quality notational
+rendering of mathematical information; in particular, we have been
+active in the MathML Working Group since 1999, and developed inside
+\HELM{} a MathML-compliant widget for the GTK graphical environment
+which can be integrated in any application.
+\end{itemize}
+The exportation issue, extensively discussed in \cite{exportation-module},
+has several major implications worth discussing.
+
+The first point concerns the kind of content information to be exported. In a
+proof assistant like \COQ{}, proofs are represented in at least three clearly
+distinguishable formats: \emph{scripts} (i.e. sequences of commands issued by
+the user to the system during an interactive proof session), \emph{proof
+objects} (the low-level representation of proofs in the form of lambda-terms,
+readable by and checked by the kernel) and \emph{proof-trees} (a kind of
+intermediate representation, vaguely inspired by a sequent-like notation,
+that inherits most of the defects but essentially none of the advantages of
+the previous representations).
+Partially related to this problem, there is the
+issue of the {\em granularity} of the library: scripts usually comprise
+small developments with many definitions and theorems, while
+proof objects correspond to individual mathematical items.
+
+In our case, the choice of the content encoding was eventually dictated
+by the methodological assumption of offering the information in a
+stable and system-independent format. The language of scripts is too
+oriented towards \COQ{}, and it changes too rapidly to be of any interest
+to third parties. On the other hand, the language of proof objects
+depends only on the logical framework (the Calculus of Inductive
+Constructions, in the case of \COQ{}), is grammatically simple, semantically
+clear and, especially, very stable (as kernels of proof assistants
+often are).
+So the granularity of the library is at the level of individual
+objects, which also justifies, from another point of view, the need
+for efficient searching techniques for retrieving individual
+logical items from the repository.
+
+The main (possibly only) problem with proof objects is that they are
+difficult to read and do not directly correspond to what the user typed
+in. An analogy frequently made in the proof assistant community compares
+the vernacular language of scripts to a high-level source language and
+lambda terms to the assembly language they are compiled to. We do not
+share this view and prefer to look at scripts as an imperative language,
+and at lambda terms as their denotational semantics; still,
+denotational semantics may be more formal, but it is surely not more
+readable than the imperative source.
+ +For all the previous reasons, a huge amount of work inside \MOWGLI{} has +been devoted to automatic reconstruction of proofs in natural language +from lambda terms. Since lambda terms are in close connection +with natural deduction +(that is still the most natural logical language discovered so far) +the work is not hopeless as it may seem, especially if rendering +is combined, as in our case, with dynamic features supporting +in-line expansions or contractions of subproofs. The final +rendering is probably not entirely satisfactory (see \cite{ida} for a +discussion), but surely +readable (the actual quality largely depends by the way the lambda +term is written). + +Summing up, we already disposed of the following tools/techniques: +\begin{itemize} +\item XML specifications for the Calculus of Inductive Constructions, +with tools for parsing and saving mathematical objects in such a format; +\item metadata specifications and tools for indexing and querying the +XML knowledge base; +\item a proof checker (i.e. the {\em kernel} of a proof assistant), + implemented to check that we exported form the \COQ{} library all the +logically relevant content; +\item a sophisticated parser (used by the search engine), able to deal +with potentially ambiguous and incomplete information, typical of the +mathematical notation \cite{}; +\item a {\em refiner}, i.e. a type inference system, based on complex +existential variables, used by the disambiguating parser; +\item complex transformation algorithms for proof rendering in natural +language; +\item an innovative rendering widget, supporting high-quality bidimensional +rendering, and semantic selection, i.e. the possibility to select semantically +meaningful rendering expressions, and to past the respective content into +a different text area. +\NOTE{il widget\\ non ha sel\\ semantica} +\end{itemize} +Starting from all this, the further step of developing our own +proof assistant was too +small and too tempting to be neglected. Essentially, we ``just'' had to +add an authoring interface, and a set of functionalities for the +overall management of the library, integrating everything into a +single system. \MATITA{} is the result of this effort. + +At first sight, \MATITA{} looks as (and partly is) a \COQ{} clone. This is +more the effect of the circumstances of its creation described +above than the result of a deliberate design. In particular, we +(essentially) share the same foundational dialect of \COQ{} (the +Calculus of Inductive Constructions), the same implementative +language (\OCAML{}), and the same (script based) authoring philosophy. +However, as we shall see, the analogy essentially stops here. + +In a sense; we like to think of \MATITA{} as the way \COQ{} would +look like if entirely rewritten from scratch: just to give an +idea, although \MATITA{} currently supports almost all functionalities of +\COQ{}, it links 60'000 lins of \OCAML{} code, against ... of \COQ{} (and +we are convinced that, starting from scratch again, we could furtherly +reduce our code in sensible way).\NOTE{righe\\\COQ{}} \begin{itemize} \item scelta del sistema fondazionale @@ -60,37 +221,550 @@ \end{itemize} \end{itemize} -\textbf{Acknowledgements} -We would like to thank all the students that during the past -five years collaborated in the \HELM{} project and contributed to -the development of Matita, and in particular -A.Griggio, F.Guidi, P. Di Lena, L.Padovani, I.Schena, M.Selmi, -V.Tamburrelli. 
-
 \section{Features}

 \subsection{mathml}
 \ASSIGNEDTO{zack}

 \subsection{metavariabili}
+\label{sec:metavariables}
 \ASSIGNEDTO{csc}

 \subsection{pattern}
-\ASSIGNEDTO{gares}
+\ASSIGNEDTO{gares}\\
+Patterns are the textual counterpart of the graphical selection performed
+with the MathML widget.
+
+\MATITA{} benefits from a graphical interface and a powerful MathML rendering
+widget that allows the user to select pieces of the sequent he is working
+on. While this is an extremely intuitive way for the user to restrict the
+application of a tactic to, for example, some subterms of the conclusion or
+of some hypothesis, the way this action is recorded in the textual script is
+not obvious. In \MATITA{} this issue is addressed by patterns.
+
+\subsubsection{Pattern syntax}
+A pattern is composed of two terms: a $\NT{sequent\_path}$ and a
+$\NT{wanted}$.
+The former mocks up a sequent, discharging unwanted subterms with $?$ and
+selecting the interesting parts with the placeholder $\%$.
+The latter is a term that lives in the context of the placeholders.
+
+The concrete syntax is reported in Table~\ref{tab:pathsyn}.
+\NOTE{we use names\\ different from the\\ grammar, but\\ meaningful ones}
+\begin{table}
+ \caption{\label{tab:pathsyn} Concrete syntax of \MATITA{} patterns.\strut}
+\hrule
+\[
+\begin{array}{@{}rcll@{}}
+  \NT{pattern} &
+    ::= & [~\verb+in match+~\NT{wanted}~]~[~\verb+in+~\NT{sequent\_path}~] & \\
+  \NT{sequent\_path} &
+    ::= & \{~\NT{ident}~[~\verb+:+~\NT{multipath}~]~\}~
+      [~\verb+\vdash+~\NT{multipath}~] & \\
+  \NT{wanted} & ::= & \NT{term} & \\
+  \NT{multipath} & ::= & \NT{term\_with\_placeholders} & \\
+\end{array}
+\]
+\hrule
+\end{table}
+
+\subsubsection{How patterns work}
+Patterns mimic the user's selection in two steps. The first one
+selects roots (subterms) of the sequent, using the
+$\NT{sequent\_path}$, while the second
+one searches the $\NT{wanted}$ term starting from these roots. Both steps are
+optional, and by convention the empty pattern selects the whole
+goal.
+
+\begin{description}
+\item[Phase 1]
+  concerns only the $[~\verb+in+~\NT{sequent\_path}~]$
+  part of the syntax. $\NT{ident}$ is a hypothesis name and
+  selects the assumption where the following optional $\NT{multipath}$
+  will operate. \verb+\vdash+ can be considered the name of the goal.
+  If the whole pattern is omitted, the whole goal will be selected.
+  If one or more hypothesis names are given, the selection is restricted to
+  these assumptions. If a $\NT{multipath}$ is omitted, the whole
+  assumption is selected. Remember that the user can be mostly
+  unaware of this syntax, since the system is able to write down a
+  $\NT{sequent\_path}$ starting from a visual selection.
+  \NOTE{this does not\\ work yet in\\ \MATITA{}}
+
+  A $\NT{multipath}$ is a CIC term in which a special constant $\%$
+  is allowed.
+  The roots of discharged subterms are marked with $?$, while $\%$
+  is used to select roots. The default $\NT{multipath}$, the one that
+  selects the whole term, is simply $\%$.
+  Valid $\NT{multipath}$s are, for example, $(?~\%~?)$, which selects the
+  first argument of an application, or $\%~\verb+\to+~(\%~?)$, which selects
+  both the source of an arrow and the head of the application found in the
+  arrow target.
+
+  The first phase selects not only terms (roots of subterms) but also
+  their context, which will eventually be used in the second phase.
+
+\item[Phase 2]
+  plays a role only if the $[~\verb+in match+~\NT{wanted}~]$
+  part is specified.
From the first phase we have some terms, which we
+  will regard as subterm roots, together with their contexts. The
+  $\NT{wanted}$ term is disambiguated in each of these contexts, and the
+  corresponding root is then searched for a subterm $\alpha$-equivalent to
+  $\NT{wanted}$. The result of this search is the selection the
+  pattern represents.
+
+\end{description}
+
+\noindent
+Since the first step alone is as expressive as the composition of the two
+steps, the system uses it to represent each visual selection.
+The second step is only meant for the experienced user who writes patterns
+by hand, since it really helps in writing concise patterns, as we shall see
+in the following examples.
+
+\subsubsection{Examples}
+To explain how the first step works, let us give an example. Suppose
+you want to prove the uniqueness of the identity element $0$ for natural
+sum, and that you can rely on the previously proved left
+injectivity of the sum, that is $inj\_plus\_l:\forall x,y,z.x+y=z+y \to x=z$.
+Typing
+\begin{grafite}
+theorem valid_name: \forall n,m. m + n = n \to m = O.
+  intros (n m H).
+\end{grafite}
+\noindent
+leads you to the following sequent
+\sequent{
+n:nat\\
+m:nat\\
+H: m + n = n}{
+m=O
+}
+\noindent
+where you want to change the right-hand side of the equality in the
+hypothesis $H$ into $O + n$, and then use $inj\_plus\_l$ to prove $m=O$.
+\begin{grafite}
+  change in H:(? ? ? %) with (O + n).
+\end{grafite}
+\noindent
+This pattern, a simple instance of the $\NT{sequent\_path}$ grammar entry,
+acts on $H$, whose type (without notation) is $(eq~nat~(m+n)~n)$: it
+discharges the head of the application and the first two arguments with
+$?$ and selects the last argument with $\%$. The syntax may seem
+uncomfortable, but the user can simply select the right-hand side of the
+equality with the mouse and leave to the system the burden of writing down
+in the script file the corresponding pattern, with $?$ and $\%$ in the right
+places (which is not trivial, especially when implicit arguments are hidden
+by the notation, like the type $nat$ in this example).
+
+Changing all the occurrences of $n$ in the hypothesis $H$ into $O+n$
+works too, and the experienced user can obtain it by directly writing
+a simpler pattern that uses the second phase.
+\begin{grafite}
+  change in match n in H with (O + n).
+\end{grafite}
+\noindent
+In this case the $\NT{sequent\_path}$ selects the whole $H$, while
+the second phase searches the wanted $n$ inside it by
+$\alpha$-equivalence. The resulting
+equality will be $m+(O+n)=O+n$, since the second phase finds two
+occurrences of $n$ in $H$ and the tactic changes both.
+
+Just for completeness, the second pattern is equivalent to the
+following one, which is less readable but uses only the first phase.
+\begin{grafite}
+  change in H:(? ? (? ? %) %) with (O + n).
+\end{grafite}
+
+\subsubsection{Tactics supporting patterns}
+In \MATITA{} the following tactics can be restricted to subterms of the
+working sequent: \texttt{simplify}, \texttt{change}, \texttt{fold},
+\texttt{unfold}, \texttt{generalize}, \texttt{replace} and \texttt{rewrite}.
+\NOTE{currently rewrite and\\ fold do not support\\ phase 2; to support it\\
+they must turn the\\ phase1+phase2 pattern\\ into a phase1-only one,\\
+as in the last example,\\ via pattern\_of(select(pattern))}
+
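+
+As a further illustration of the same syntax (the following invocation is
+only a sketch and is not taken from the \MATITA{} library), any tactic in the
+list above can be applied to the hypothesis $H$ of the previous examples
+through the very same pattern notation; for instance, simplification can be
+restricted to the left-hand side of the equality:
+
+\begin{grafite}
+  (* sketch only: same pattern syntax, different tactic *)
+  simplify in H:(? ? % ?).
+\end{grafite}
+
+\noindent
+The pattern is read exactly as for \texttt{change}: the head of the
+application and the other arguments are discharged with $?$, while $\%$
+marks the only subterm the tactic is allowed to affect.
+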
+\subsubsection{Comparison with Coq}
+\COQ{} has two different ways of restricting the application of tactics to
+subterms of the sequent, both relying on the same special syntax to identify
+a term occurrence.
+
+The first way is to use this special syntax to specify directly to the
+tactic the occurrences of a wanted term that should be affected, while
+the second is to prepare the sequent with another tactic called
+\texttt{pattern} and then apply the real tactic. Note that the choice is not
+left to the user, since some tactics need the sequent to be prepared
+with \texttt{pattern} and do not directly accept this special syntax.
+
+The basic idea is that, to identify a subterm of the sequent, we can
+write it down and say that we want, for example, its third and fifth
+occurrences (counting from left to right). In our previous example,
+to change only the right-hand side of the equality, the correct command
+is
+\begin{grafite}
+  change n at 2 in H with (O + n)
+\end{grafite}
+\noindent
+meaning that in the hypothesis $H$ the $n$ we want to change is the
+second one we encounter proceeding from left to right.
+
+The tactic \texttt{pattern} computes a
+$\beta$-expansion of a part of the sequent with respect to some
+occurrences of the given term. In the previous example the following
+command
+\begin{grafite}
+  pattern n at 2 in H
+\end{grafite}
+\noindent
+would have resulted in this sequent
+\begin{grafite}
+  n : nat
+  m : nat
+  H : (fun n0 : nat => m + n = n0) n
+  ============================
+  m = 0
+\end{grafite}
+\noindent
+where $H$ is $\beta$-expanded over the second occurrence of $n$.
+This is a trick to make the unification algorithm ignore
+the head of the application (since unification is essentially
+first-order) while operating normally on the arguments.
+It works for some tactics, like \texttt{rewrite} and \texttt{replace},
+but not, for example, for \texttt{change} and other tactics that do not
+rely on unification.
+
+The idea behind this way of identifying subterms is not really far
+from the idea behind patterns, but it fails to be of help when complex
+notation is involved, since it relies on the way the user sees the
+sequent. Notation can swap arguments, stack them vertically, or
+even put them inside a bidimensional matrix. In these cases using the
+mouse to select the wanted term is probably the only way to tell the
+system exactly what you want to do.
+
+One of the goals of \MATITA{} is to exploit modern publishing techniques,
+and adopting a method for restricting the application domain of tactics
+that discourages the use of rich mathematical notation would definitely
+be a bad choice.
\subsection{tatticali} \ASSIGNEDTO{gares} -\subsection{disambiguazione} +\subsection{Disambiguation} +\label{sec:disambiguation} \ASSIGNEDTO{zack} +\begin{table} + \caption{\label{tab:termsyn} Concrete syntax of CIC terms: built-in + notation\strut} +\hrule +\[ +\begin{array}{@{}rcll@{}} + \NT{term} & ::= & & \mbox{\bf terms} \\ + & & x & \mbox{(identifier)} \\ + & | & n & \mbox{(number)} \\ + & | & s & \mbox{(symbol)} \\ + & | & \mathrm{URI} & \mbox{(URI)} \\ + & | & \verb+_+ & \mbox{(implicit)}\TODO{sync} \\ + & | & \verb+?+n~[\verb+[+~\{\NT{subst}\}~\verb+]+] & \mbox{(meta)} \\ + & | & \verb+let+~\NT{ptname}~\verb+\def+~\NT{term}~\verb+in+~\NT{term} \\ + & | & \verb+let+~\NT{kind}~\NT{defs}~\verb+in+~\NT{term} \\ + & | & \NT{binder}~\{\NT{ptnames}\}^{+}~\verb+.+~\NT{term} \\ + & | & \NT{term}~\NT{term} & \mbox{(application)} \\ + & | & \verb+Prop+ \mid \verb+Set+ \mid \verb+Type+ \mid \verb+CProp+ & \mbox{(sort)} \\ + & | & \verb+match+~\NT{term}~ & \mbox{(pattern matching)} \\ + & & ~ ~ [\verb+[+~\verb+in+~x~\verb+]+] + ~ [\verb+[+~\verb+return+~\NT{term}~\verb+]+] \\ + & & ~ ~ \verb+with [+~[\NT{rule}~\{\verb+|+~\NT{rule}\}]~\verb+]+ & \\ + & | & \verb+(+~\NT{term}~\verb+:+~\NT{term}~\verb+)+ & \mbox{(cast)} \\ + & | & \verb+(+~\NT{term}~\verb+)+ \\ + \NT{defs} & ::= & & \mbox{\bf mutual definitions} \\ + & & \NT{fun}~\{\verb+and+~\NT{fun}\} \\ + \NT{fun} & ::= & & \mbox{\bf functions} \\ + & & \NT{arg}~\{\NT{ptnames}\}^{+}~[\verb+on+~x]~\verb+\def+~\NT{term} \\ + \NT{binder} & ::= & & \mbox{\bf binders} \\ + & & \verb+\forall+ \mid \verb+\lambda+ \\ + \NT{arg} & ::= & & \mbox{\bf single argument} \\ + & & \verb+_+ \mid x \\ + \NT{ptname} & ::= & & \mbox{\bf possibly typed name} \\ + & & \NT{arg} \\ + & | & \verb+(+~\NT{arg}~\verb+:+~\NT{term}~\verb+)+ \\ + \NT{ptnames} & ::= & & \mbox{\bf bound variables} \\ + & & \NT{arg} \\ + & | & \verb+(+~\NT{arg}~\{\verb+,+~\NT{arg}\}~[\verb+:+~\NT{term}]~\verb+)+ \\ + \NT{kind} & ::= & & \mbox{\bf induction kind} \\ + & & \verb+rec+ \mid \verb+corec+ \\ + \NT{rule} & ::= & & \mbox{\bf rules} \\ + & & x~\{\NT{ptname}\}~\verb+\Rightarrow+~\NT{term} +\end{array} +\] +\hrule +\end{table} + +\subsubsection{Term input} + +The primary form of user interaction employed by \MATITA{} is textual script +editing: the user modifies it and evaluate step by step its composing +\emph{statements}. Examples of statements are inductive type definitions, +theorem declarations, LCF-style tacticals, and macros (e.g. \texttt{Check} can +be used to ask the system to refine a given term and pretty print the result). +Since many statements refer to terms of the underlying calculus, \MATITA{} needs +a concrete syntax able to encode terms of the Calculus of Inductive +Constructions. + +Two of the requirements in the design of such a syntax are apparently in +contrast: +\begin{enumerate} + \item the syntax should be as close as possible to common mathematical practice + and implement widespread mathematical notations; + \item each term described by the syntax should be non-ambiguous meaning that it + should exists a function which associates to it a CIC term. +\end{enumerate} + +These two requirements are addressed in \MATITA{} by the mean of two mechanisms +which work together: \emph{term disambiguation} and \emph{extensible notation}. +Their interaction is visible in the architecture of the \MATITA{} input phase, +depicted in Fig.~\ref{fig:inputphase}. 
The architecture is articulated as a
+pipeline of three levels: the concrete syntax level (level 0) is the one the
+user has to deal with when inserting CIC terms; the abstract syntax level
+(level 2) is an internal representation which intuitively encodes
+mathematical formulae at the content level~\cite{adams,mkm-structure}; the
+last level is that of CIC terms.
+
+\begin{figure}[ht]
+ \begin{center}
+  \includegraphics[width=0.9\textwidth]{input_phase}
+  \caption{\MATITA{} input phase}
+ \end{center}
+ \label{fig:inputphase}
+\end{figure}
+
+Requirement (1) is addressed by a built-in concrete syntax for terms,
+described in Tab.~\ref{tab:termsyn}, and by the extensible notation
+mechanism, which offers a way to extend the available mathematical notations.
+Extensible notation, which is also in charge of providing a parsing function
+mapping concrete syntax terms to content level terms, is described in
+Sect.~\ref{sec:notation}. Requirement (2) is addressed by the joint action of
+that parsing function and of disambiguation, which provides a function from
+content level terms to CIC terms.
+
+\subsubsection{Sources of ambiguity}
+
+The translation from content level terms to CIC terms is not straightforward
+because some nodes of the content encoding admit more than one CIC encoding,
+invalidating requirement (2).
+
+\begin{example}
+  \label{ex:disambiguation}
+
+  Consider the concrete syntax level term \texttt{\TEXMACRO{forall} x. x +
+  ln 1 = x} of Fig.~\ref{fig:inputphase}(a); it could be the type of a lemma
+  the user wants to prove. Assuming that both \texttt{+} and \texttt{=} are
+  parsed as infix operators, all the following questions are legitimate and
+  must be answered before obtaining a CIC term from its content level
+  encoding (Fig.~\ref{fig:inputphase}(b)):
+
+  \begin{enumerate}
+
+    \item Since \texttt{ln} is an unbound identifier, which CIC constant does
+      it represent? Many different theorems in the library may share its
+      (rather short) name \dots
+
+    \item Which kind of number (\IN, \IR, \dots) does the \texttt{1} literal
+      stand for? Which encoding is used in CIC to represent it? E.g.,
+      assuming $1\in\IN$, is it a unary or a binary encoding?
+
+    \item Which kind of equality does the ``='' node represent? Is it
+      Leibniz's polymorphic equality? Is it a decidable equality over \IN,
+      \IR, \dots?
+
+  \end{enumerate}
+
+\end{example}
+
+In \MATITA{}, three \emph{sources of ambiguity} are admitted for content
+level terms: unbound identifiers, literal numbers, and operators. Each
+instance of an ambiguity source (an \emph{ambiguous entity}) occurring in a
+content level term is associated with a \emph{disambiguation domain}.
+Intuitively, a disambiguation domain is a set of CIC terms which may be
+substituted for an ambiguous entity during disambiguation. Each item of the
+domain is said to be an \emph{interpretation} for the ambiguous entity.
+
+\emph{Unbound identifiers} (question 1) are ambiguous entities since the
+namespace of CIC objects is not flat and the same identifier may denote many
+of them. For example, the short name \texttt{plus\_assoc} in the \HELM{}
+library is shared by three different theorems stating the associative
+property of different additions. This kind of ambiguity is avoidable if the
+user is willing to use long names (in the form of URIs in the \texttt{cic://}
+scheme) in the concrete syntax, with the obvious drawback of obtaining long
+and unreadable terms.
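+
+For instance, instead of the ambiguous short name, the user could write
+something along the following lines (the URI shown is purely illustrative of
+the concrete syntax and is not guaranteed to be the actual location of the
+theorem in the library):
+
+\begin{grafite}
+  (* the URI below is illustrative *)
+  apply cic:/matita/nat/plus/plus_assoc.con.
+\end{grafite}
+
+\noindent
+which identifies exactly one constant, at the price of readability.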
+
+Given an unbound identifier, the corresponding disambiguation domain is
+computed by querying the library for all constants, inductive types, and
+inductive type constructors having it as their short name (see the \LOCATE{}
+query in Sect.~\ref{sec:metadata}).
+
+\emph{Literal numbers} (question 2) are ambiguous entities as well, since
+different kinds of numbers (\IN, \IR, \IZ, \dots) can be encoded in CIC using
+different encodings. Considering the restricted example of natural numbers,
+we can for instance encode them in CIC using inductive datatypes with a
+number of constructors equal to the encoding base plus 1, obtaining one
+encoding for each base.
+
+For each possible way of mapping a literal number to a CIC term, \MATITA{} is
+aware of a \emph{number interpretation function} which, when applied to the
+natural number denoted by the literal,\footnote{At the moment only literal
+natural numbers are supported in the concrete syntax.} returns a
+corresponding CIC term. The disambiguation domain for a given literal number
+is built by applying to the literal all the available number interpretation
+functions in turn.
+
+Number interpretation functions can be defined in OCaml or directly using
+\TODO{notation for numbers}.
+
+\emph{Operators} (question 3) are intuitively heads of applications; as such,
+they are always applied to a non-empty sequence of arguments. Their ambiguity
+is needed, since notation is often used in an overloaded fashion to hide the
+use of different CIC constants encoding similar concepts. For example, in the
+standard library of \MATITA{} the infix \texttt{+} notation is available,
+building a binary \texttt{Op(+)} node, whose disambiguation domain may refer
+to different constants like the addition over natural numbers
+\URI{cic:/matita/nat/plus/plus.con} or that over real numbers of the \COQ{}
+standard library \URI{cic:/Coq/Reals/Rdefinitions/Rplus.con}.
+
+For each possible way of mapping an operator application to a CIC term,
+\MATITA{} knows an \emph{operator interpretation function} which, when
+applied to an operator and its arguments, returns a CIC term. The
+disambiguation domain for a given operator is built by applying to the
+operator and its arguments all the available operator interpretation
+functions in turn.
+
+Operator interpretation functions can be added using the
+\texttt{interpretation} statement. For example, among the first lines of the
+script \FILE{matita/library/logic/equality.ma} from the \MATITA{} standard
+library we read:
+
+\begin{grafite}
+interpretation "leibnitz's equality"
+  'eq x y =
+   (cic:/matita/logic/equality/eq.ind#xpointer(1/1) _ x y).
+\end{grafite}
+
+Evaluating it in \MATITA{} will add an operator interpretation function for
+the binary operator \texttt{eq} which expands to the CIC term on the
+right-hand side of the statement. That CIC term is written using only the
+built-in concrete syntax and cannot contain any ambiguity source; still, it
+can refer to operator arguments bound on the left-hand side and can contain
+implicit terms (denoted by \texttt{\_}), which will be expanded to fresh
+metavariables. The latter feature is used in the example above for the first
+argument of Leibniz's polymorphic equality.
+
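+
+Along the same lines, one of the interpretations of the overloaded infix
+\texttt{+} discussed above could be declared with a statement of the
+following shape (a sketch: the symbol name \texttt{'plus} and the exact
+formatting are illustrative, only the URI is the one quoted above for natural
+addition):
+
+\begin{grafite}
+(* sketch: the symbol name 'plus is illustrative *)
+interpretation "natural plus"
+  'plus x y =
+   (cic:/matita/nat/plus/plus.con x y).
+\end{grafite}
+
+\noindent
+A second \texttt{interpretation} statement for the same notation, expanding
+for instance to \URI{cic:/Coq/Reals/Rdefinitions/Rplus.con}, is what
+populates the disambiguation domain of \texttt{+} with more than one
+interpretation.
+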
+\subsubsection{Disambiguation algorithm}
+
+\NOTE{here we assume\\ that refinement has\\ already been presented}
+
+A \emph{disambiguation algorithm} takes as input a content level term and
+returns a fully determined CIC term. The key observation on which a
+disambiguation algorithm is based is that, given a content level term with
+more than one source of ambiguity, not all possible combinations of
+interpretations lead to a typable CIC term. In the term of
+Ex.~\ref{ex:disambiguation}, for instance, the interpretation of \texttt{ln}
+as a function from \IR{} to \IR{} and the interpretation of \texttt{1} as the
+Peano number $1$ cannot coexist. The notion of ``cannot coexist'' used by the
+disambiguation of \MATITA{} is inherited from the refiner described in
+Sect.~\ref{sec:metavariables}: as long as $\mathit{refine}(c)\neq\bot$, the
+combination of interpretations which led to $c$ can coexist.
+
+The \emph{naive disambiguation algorithm} takes as input a content level term
+$t$ and proceeds as follows:
+
+\begin{enumerate}
+
+  \item Create disambiguation domains $\{D_i | i\in\mathit{Dom}(t)\}$, where
+  $\mathit{Dom}(t)$ is the set of ambiguity sources of $t$. Each $D_i$ is a
+  set of CIC terms and can be built as described above.
+
+  \item Let $\Phi = \{\phi_i | {i\in\mathit{Dom}(t)},\phi_i\in D_i\}$ be an
+  interpretation for $t$. Given $t$ and an interpretation $\Phi$, a CIC term
+  is fully determined. Iterate over all possible interpretations of $t$,
+  refine the corresponding CIC terms, and keep only the interpretations which
+  lead to CIC terms $c$ such that $\mathit{refine}(c)\neq\bot$ (i.e.
+  interpretations that determine typable terms).
+
+  \item Let $n$ be the number of interpretations that survived step 2. If
+  $n=0$, signal a type error. If $n=1$, we have found exactly one CIC term
+  corresponding to $t$: return it as the output of the disambiguation phase.
+  If $n>1$, we have found several different CIC terms which can correspond to
+  the content level term: let the user choose one of the $n$ interpretations
+  and return the corresponding term.
+
+\end{enumerate}
+
+The above algorithm is highly inefficient, since the number of possible
+interpretations $\Phi$ grows exponentially with the number of ambiguity
+sources. The actual algorithm used in \MATITA{} is far more efficient, being,
+in the average case, linear in the number of ambiguity sources.
+
+\TODO{up to here}
+
+The efficient algorithm can be applied if the logic can be extended with
+metavariables and a refiner can be implemented. This is the case for CIC and
+several other logics.
+\emph{Metavariables}~\cite{munoz} are typed, non-linear placeholders that can
+occur in terms; $?_i$ usually denotes the $i$-th metavariable, while $?$
+denotes a freshly created metavariable. A \emph{refiner}~\cite{McBride} is a
+function whose input is a term with placeholders and whose output is either a
+new term, obtained by instantiating some placeholders, or $\bot$, meaning
+that no well-typed instantiation could be found for the placeholders
+occurring in the term (a type error).
+
+The efficient algorithm starts with an interpretation $\Phi_0 = \{\phi_i |
+\phi_i = ?, i\in\mathit{Dom}(t)\}$, which associates a fresh metavariable to
+each source of ambiguity. Then it iterates, refining the current CIC term
+(i.e. the term obtained by interpreting $t$ with $\Phi_i$). If the refinement
+succeeds, the next interpretation $\Phi_{i+1}$ is created by \emph{making a
+choice}, that is by replacing a placeholder with one of the possible choices
+from the corresponding disambiguation domain. The placeholder to be replaced
+is chosen following a preorder visit of the ambiguous term. If the refinement
+fails, the current set of choices cannot lead to a well-typed term and
+backtracking is attempted.
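+
+To make the gain concrete, one can count the calls to the refiner: the naive
+algorithm refines one term per interpretation, that is (assuming every
+ambiguity source has at least two possible interpretations)
+\[
+  \prod_{i\in\mathit{Dom}(t)} |D_i| \;\geq\; 2^{|\mathit{Dom}(t)|}
+\]
+terms, while the efficient algorithm described above interleaves choices and
+refinements and, in the average case, remains linear in the number
+$|\mathit{Dom}(t)|$ of ambiguity sources.
+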
+Once an unambiguous correct interpretation is found (i.e. $\Phi_i$ does no +longer contain any placeholder), backtracking is attempted +anyway to find the other correct interpretations. + +The intuition which explain why this algorithm is more efficient is that as soon +as a term containing placeholders is not typable, no further instantiation of +its placeholders could lead to a typable term. For example, during the +disambiguation of user input \texttt{\TEXMACRO{forall} x. x*0 = 0}, an +interpretation $\Phi_i$ is encountered which associates $?$ to the instance +of \texttt{0} on the right, the real number $0$ to the instance of \texttt{0} on +the left, and the multiplication over natural numbers (\texttt{mult} for short) +to \texttt{*}. The refiner will fail, since \texttt{mult} require a natural +argument, and no further instantiation of the placeholder will be tried. + +If, at the end of the disambiguation, more than one possible interpretations are +possible, the user will be asked to choose the intended one (see +Fig.~\ref{fig:disambiguation}). + +\begin{figure}[htb] +% \centerline{\includegraphics[width=0.9\textwidth]{disambiguation-1}} + \caption{\label{fig:disambiguation} Disambiguation: interpretation choice} +\end{figure} + +Details of the disambiguation algorithm of \WHELP{} can +be found in~\cite{disambiguation}, where an equivalent algorithm +that avoids backtracking is also presented. + \subsection{notazione} +\label{sec:notation} \ASSIGNEDTO{zack} \subsection{libreria tutta visibile} \ASSIGNEDTO{csc} \subsection{ricerca e indicizzazione} +\label{sec:metadata} \ASSIGNEDTO{andrea} \subsection{auto} @@ -125,39 +799,14 @@ V.Tamburrelli. \subsection{localizzazione errori} \ASSIGNEDTO{} -\begin{thebibliography}{} - - \bibitem{annals} A.~Asperti, F.~Guidi, L.~Padovani, C.~Sacerdoti Coen, - I.~Schena. \emph{Mathematical Knowledge Management in HELM}. Annals of - Mathematics and Artificial Intelligence, 38(1): 27--46; May 2003. - - \bibitem{metadata2} A. Asperti, M. Selmi. \emph{Efficient Retrieval of - Mathematical Statements}. In Proceeding of the Third International Conference - on Mathematical Knowledge Management, MKM 2004. Bialowieza, Poland. LNCS 3119. - \bibitem{pechino} A.Asperti, B.Wegner. \emph{An Approach to - Machine-Understandable Representation of the Mathematical Information in - Digital Documents}. In: Fengshai Bai and Bernd Wegner (eds.): Electronic - Information and Communication in Mathematics, LNCS vol. 2730, - pp. 14--23, 2003 - - \bibitem{coq} The Coq proof-assistant, \url{http://coq.inria.fr} - - \bibitem{metadata1} F. Guidi, C. Sacerdoti Coen. \emph{Querying Distributed - Digital Libraries of Mathematics}. In Proceedings of Calculemus 2003, 11th - Symposium on the Integration of Symbolic Computation and Mechanized - Reasoning. Aracne Editrice. - - \bibitem{exportation} C. Sacerdoti Coen. \emph{From Proof-Assistans to - Distributed Libraries of Mathematics: Tips and Pitfalls}. - In Proc. Mathematical Knowledge Management 2003, Lecture Notes in Computer - Science, Vol. 2594, pp. 30--44, Springer-Verlag. - - \bibitem{disambiguation} C. Sacerdoti Coen, S. Zacchiroli. \emph{Efficient - Ambiguous Parsing of Mathematical Formulae}. In Proceedings of the Third - International Conference on Mathematical Knowledge Management, MKM 2004. - LNCS,3119. 
- -\end{thebibliography} +\textbf{Acknowledgements} +We would like to thank all the students that during the past +five years collaborated in the \HELM{} project and contributed to +the development of Matita, and in particular +A.Griggio, F.Guidi, P. Di Lena, L.Padovani, I.Schena, M.Selmi, +V.Tamburrelli. + +\bibliography{matita} \end{document}