+of proofs. The terms need to be maximally unshared (i.e. they must be a tree
+and not a DAG). The reason is that we need to associate different typing
+information to the occurrences of a subterm in two different positions.
+This association is made easier when the term is represented as a tree, since
+it is possible to label each node with a unique identifier and associate
+the typing information using a map on the identifiers.
+The \texttt{cic\_acic} \component{} unshares and annotates terms. It is used
+by the \texttt{library} \component{} since fully specified terms are stored
+in the library in their annotated form.
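The unsharing step can be sketched as follows. This is a minimal illustration, assuming terms are plain labelled nodes; the actual \texttt{cic\_acic} datatypes are far richer, and all names here are hypothetical.

```python
# Sketch of unsharing: every node of the input DAG is copied, so each
# occurrence of a shared subterm becomes a distinct tree node carrying a
# fresh identifier. Typing information can then be attached via a map on
# the identifiers. (Hypothetical types, not the actual cic_acic API.)
import itertools

class Node:
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)
        self.id = None  # assigned during unsharing

_fresh = itertools.count()

def unshare(node):
    """Return a fully unshared copy of `node`: shared subterms are
    duplicated and every copy receives a unique identifier."""
    copy = Node(node.label, [unshare(c) for c in node.children])
    copy.id = next(_fresh)
    return copy

# Example: the DAG (plus x x) sharing the single node `x`.
x = Node("x")
dag = Node("plus", [x, x])        # both children are the *same* object
tree = unshare(dag)
assert tree.children[0] is not tree.children[1]  # occurrences now distinct
# Distinct ids allow distinct typing information per occurrence:
types = {tree.children[0].id: "nat", tree.children[1].id: "nat"}
```

The key invariant is that after `unshare` no node object occurs twice, so the identifier map can distinguish the two occurrences of `x`.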
+
+We do not yet provide a reverse translation from content level proofs to
+partially specified terms. However, in \texttt{cic\_disambiguation} we do
+provide the reverse translation for expressions. The mapping from
+content level expressions to partially specified terms is not unique, due to
+the ambiguity of the content level. As a consequence the translation
+is guided by an \emph{interpretation}, that is a function that chooses for
+every ambiguous expression one partially specified term. The
+\texttt{cic\_disambiguation} \component{} implements the
+disambiguation algorithm we presented in~\cite{disambiguation}, which is
+responsible for building, in an efficient way, the set of all ``correct''
+interpretations. An interpretation is correct if the partially specified term
+obtained using the interpretation is refinable.
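The notion of correct interpretation can be illustrated by the following naive sketch, which enumerates all interpretations and filters the refinable ones. The real algorithm of the \texttt{cic\_disambiguation} \component{} is far more efficient than exhaustive enumeration; all names below are hypothetical.

```python
# Naive sketch of disambiguation: an interpretation assigns one meaning to
# each ambiguous symbol; an interpretation is correct when the resulting
# term is refinable. (Illustrative only; not the actual algorithm.)
from itertools import product

def correct_interpretations(symbols, candidates, refinable):
    """symbols: ambiguous symbols occurring in the expression;
    candidates: symbol -> list of possible meanings;
    refinable: interpretation (dict) -> bool, provided by the refiner."""
    correct = []
    for choice in product(*(candidates[s] for s in symbols)):
        interp = dict(zip(symbols, choice))
        if refinable(interp):
            correct.append(interp)
    return correct

# Toy example: "+" may mean addition on nat or on int; "0" likewise.
cands = {"+": ["plus_nat", "plus_int"], "0": ["O_nat", "O_int"]}
ok = correct_interpretations(
    ["+", "0"], cands,
    lambda i: i["+"].split("_")[1] == i["0"].split("_")[1])  # types agree
assert ok == [{"+": "plus_nat", "0": "O_nat"},
              {"+": "plus_int", "0": "O_int"}]
```

Note that the number of interpretations grows exponentially with the number of ambiguous symbols, which is why an efficient strategy matters.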
+
+\subsection{Presentation level terms}
+
+Content level terms are a sort of abstract syntax trees for mathematical
+expressions and proofs. The concrete syntax given to these abstract trees
+is called \emph{presentation level}.
+
+The most important difference between the content level language and the
+presentation level language is that only the former is extensible. Indeed,
+the presentation level language is a finite language that comprises all
+the usual mathematical symbols. Mathematicians invent new notions every
+single day, but they stick to a set of symbols that is more or less fixed.
+
+The fact that the presentation language is finite allows the definition of
+standard languages. In particular, for formulae we have adopted MathML
+Presentation~\cite{mathml}, an XML dialect standardized by the W3C. To
+visually
+represent proofs it is enough to embed formulae in plain text enriched with
+formatting boxes. Since the language of formatting boxes is very simple,
+many equivalent specifications exist and we have adopted our own, called
+BoxML.
+
+The \texttt{content\_pres} \component{} contains the implementation of the
+translation from content level terms to presentation level terms. The
+rendering of presentation level terms is left to the application that uses
+the \component. However, in the \texttt{hgdome} \component{} we provide a few
+utility functions to build a \GDOME~\cite{gdome2} MathML+BoxML tree from our
+presentation
+level terms. \GDOME{} MathML+BoxML trees can be rendered by the GtkMathView
+widget developed by Luca Padovani~\cite{padovani}. The widget is
+particularly interesting since it makes it possible to implement
+\emph{semantic selection}.
+
+Semantic selection is a technique that consists in enriching the presentation
+level terms with pointers to the content level terms and to the partially
+specified terms they correspond to. Highlighting of formulae in the widget is
+constrained to the selection of meaningful expressions, i.e. expressions that
+correspond to a lower level term, that is a content term or a partially or
+fully specified term.
+Once the rendering of a lower level term is
+selected, the application can retrieve the pointer to the
+lower level term. An example of application of semantic selection is
+\emph{semantic cut\&paste}: the user can select an expression and paste it
+elsewhere preserving its semantics (i.e. the partially specified term),
+possibly performing some semantic transformation over it (e.g. renaming
+variables that would be captured or lambda-lifting free variables).
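The mechanics of semantic selection can be sketched as follows, assuming a hypothetical layout where each presentation node optionally carries a pointer to the lower level term it renders; a click on an arbitrary node is extended to the nearest enclosing node carrying such a pointer.

```python
# Sketch of semantic selection (hypothetical data layout, not the
# GtkMathView API): presentation nodes may point to the lower level term
# they render; selection snaps to the nearest meaningful ancestor.
class PNode:
    def __init__(self, parent=None, semantics=None):
        self.parent = parent        # enclosing presentation node
        self.semantics = semantics  # pointer to a lower level term, or None

def semantic_selection(node):
    """Walk up from the clicked node to the first semantically meaningful
    ancestor; return None if no enclosing node renders a lower level term."""
    while node is not None and node.semantics is None:
        node = node.parent
    return node

root = PNode(semantics="plus n O")   # renders a whole subterm
glyph = PNode(parent=root)           # a bare layout box, e.g. an hbox
assert semantic_selection(glyph).semantics == "plus n O"
```

Semantic cut\&paste then simply copies the pointed-to term instead of the rendered glyphs, so the pasted expression keeps its meaning.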
+
+The reverse translation from presentation level terms to content level terms
+is implemented by a parser that is also found in \texttt{content\_pres}.
+Unlike the translation from content level terms to partially
+specified terms, this translation is not ambiguous. The reason is that the
+parsing tool we have adopted (CamlP4) is not able to parse ambiguous
+grammars. Thus we require the mapping from presentation level terms
+(concrete syntax) to content level terms (abstract syntax) to be unique.
+This means that the user must fix once and for all the associativity and
+precedence level of every operator he is using. In practice this limitation
+does not seem too strong. The reason is that the target of the
+translation is an ambiguous language and the user is free to associate
+to every content level term several different interpretations (as a
+partially specified term).
+
+Both the direct and reverse translation from presentation to content level
+terms are parameterized over the user provided mathematical notation.
+The \texttt{lexicon} \component{} is responsible for managing the lexicon,
+that is the set of active notations. It defines an abstract syntax tree
+of commands to declare and activate new notations and it implements the
+semantics of these commands. It also implements undoing of the semantic
+actions. Among the commands there are hints to the
+disambiguation algorithm that are used to control and speed up disambiguation.
+These mechanisms will be further discussed in Sect.~\ref{sec:disambiguation}.
+
+Finally, the \texttt{grafite\_parser} \component{} implements a parser for
+the concrete syntax of the commands of \MATITA. The parser processes a stream
+of characters and returns a stream of abstract syntax trees (the ones
+defined by the \texttt{grafite} component and whose semantics is given
+by \texttt{grafite\_engine}). When the parser meets a command that changes
+the lexicon, it invokes the \texttt{lexicon} \component{} to immediately
+process the command. When the parser needs to parse a term at the presentation
+level, it invokes the already described parser for terms contained in
+\texttt{content\_pres}.
+
+The \MATITA{} proof assistant and the \WHELP{} search engine are both linked
+against the \texttt{grafite\_parser} \component{}
+since they provide an interface to the user. In both cases the formulae
+written by the user are parsed using the \texttt{content\_pres} \component{} and
+then disambiguated using the \texttt{cic\_disambiguation} \component.
+However, only \MATITA{} is linked against the \texttt{grafite\_engine} and
+\texttt{tactics} components since \WHELP{} can only execute those ASTs that
+correspond to queries (implemented in the \texttt{whelp} component).
+
+The \UWOBO{} Web service wraps the \texttt{content\_pres} \component,
+providing a rendering service for the documents in the distributed library.
+To render a document given its URI, \UWOBO{} retrieves it using the
+\GETTER{} obtaining a document with fully specified terms. Then it translates
+it to the presentation level passing through the content level. Finally
+it returns the result document to be rendered by the user's
+browser.\footnote{\TODO{the final translation to HTML is still missing}}
+
+
+The \components{} not yet described (\texttt{extlib}, \texttt{xml},
+\texttt{logger}, \texttt{registry} and \texttt{utf8\_macros}) are
+minor \components{} that provide a core of useful functions and basic
+services missing from the standard library of the programming language.
+In particular, the \texttt{xml} \component{} is used
+to easily represent, parse and pretty-print XML files.
+
+\section{Using \MATITA}
+
+\begin{figure}[t]
+ \begin{center}
+% \includegraphics[width=0.9\textwidth]{a.eps}
+ \caption{\MATITA{} screenshot}
+ \label{fig:screenshot}
+ \end{center}
+\end{figure}
+
+\MATITA{} has a script based user interface. As can be seen in
+Fig.~\ref{fig:screenshot} it is
+split in two main windows: on the left a textual widget is used to edit the
+script, on the right the list of open goals is shown using a MathML rendering
+widget. A distinguished part of the script (shaded in the screenshot) represents
+the commands already executed and cannot be edited without undoing them. The
+remaining part can be freely edited and commands from that part can be executed
+moving down the execution point. An additional window --- the ``cicBrowser'' ---
+can be used to browse the library, including the proof being developed, and
+enable content based search on it. In the cicBrowser proofs are rendered in
+natural language, automatically generated from the low-level $\lambda$-terms
+using techniques inspired by \cite{natural,YANNTHESIS}.
+
+In the \MATITA{} philosophy the script is not relevant \emph{per se}, but is
+only seen as a convenient way to create mathematical objects. The universe of
+all these objects makes up the \HELM{} library, which is always completely
+visible to the user. The mathematical library is thus conceived as a global
+hypertext, where objects may freely reference each other. It is a duty of
+the system to guide the user through the relevant parts of the library.
+
+This methodological assumption has many important consequences
+which will be discussed in the next section.
+
+%on one side
+%it requires functionalities for the overall management of the library,
+%%%%%comprising efficient indexing techniques to retrieve and filter the
+%information;
+%on the other it introduces overloading in the use of
+%identifiers and mathematical notation, requiring sophisticated disambiguation
+%techniques for interpreting the user inputs.
+%In the next two sections we shall separately discuss the two previous
+%points.
+
+%In order to maximize accessibility mathematical objects are encoded in XML. (As%discussed in the introduction,) the modular architecture of \MATITA{} is
+%organized in components which work on data in this format. For instance the
+%rendering engine, which transform $\lambda$-terms encoded as XML document to
+%MathML Presentation documents, can be used apart from \MATITA{} to print ...
+%FINIRE
+
+A final section is devoted to some innovative aspects
+of the authoring system, such as a step by step tactical execution,
+content selection and copy-paste.
+
+\section{Library Management}
+
+\subsection{Indexing and searching}
+
+
+\subsection{Compilation and decompilation}
+\label{sec:libmanagement}
+
+%
+%goals: consentire sviluppo di una librearia mantenendo integrita' referenziale e usando le teconologie nostre (quindi con metadati, XML, libreria visibile)
+%\subsubsection{Composition}
+%scripts.ma, .moo, XML, metadata
+%\subsubsection{Compilation}
+%analogie con compilazione classica dso.\\
+%granularita' differenti per uso interattivo e non
+%\paragraph{Batch}
+%- granularita' .ma/buri \\
+%-- motivazioni\\
+%- come si calcolano le dipendenze\\
+%- quando la si usa\\
+%- metodi (cc e clean)\\
+%- garanzie
+%\paragraph{Interactive}
+%- granularita' fine\\
+%-- motivazioni
+%\label{sec:libmanagement}
+%consistenza: integrita' referenziale
+%Goals: mantenere consistente la rappresentazione della libreria su memoria persistente consentendo di compilare e decompilare le compilation unit (.ma).\\
+%Vincoli: dipendenze oggetti-oggetti e metadati-oggetti\\
+%Due livelli di gestione libreria, uno e' solo in fase interattiva dove la compilazione e' passo passo: \\
+%--- granularita' oggetto per matita interactive\\
+%--- granularita' baseuri (compilation unit) per la libreria\\
+%In entrmbi i casi ora:\\
+%--- matitaSync: add, remove, timetravel(facility-macro tra 2 stati)[obj]\\
+%--- matitaCleanLib: clean\_baseuri (che poi usa matitaSync a sua volta)[comp1]\\
+%Vincoli di add: typecheck ( ==$>$ tutto quello che usa sta in lib)\\
+%Vincoli di remove: \\
+%--- la remove di mSync non li controlla (ma sa cosa cancellare per ogni uri)\\
+%--- la clean\_baseuri calcola le dipendenze con i metadati (o anche i moo direi) e li rispetta\\
+%Undo di matita garantisce la consistenza a patto che l'history che tiene sia ok\\
+%Undo della lib (mClean) garantisce la consistenza (usando moo o Db).\\
+
+The aim of this section is to describe the way \MATITA{}
+preserves the consistency and the availability of the library
+using the \WHELP{} technology, in response to the user addition or
+removal of mathematical objects.
+
+As already sketched in Sect.~\ref{fully-spec}, the output of the
+compilation of a script is split between two storage media, a
+classical filesystem and a relational database. The former is used to
+store the XML encoding of the objects defined in the script, the
+disambiguation aliases and the interpretations and notational conventions
+defined, while the latter is used to store all the metadata needed by
+\WHELP{}.
+% Non serve piu' l'update: by --Zack
+% In addition the \GETTER{} component
+% should be updated with the the new mapping between the logical URI
+% and the physical path of objects.
+
+While this kind of consistency has nothing to do with the nature of
+the content of the library and is thus uninteresting (but really
+tedious to implement and keep bug-free), there is a deeper
+notion of mathematical consistency we need to provide. Each object
+must reference only defined objects (i.e. each proof must use only
+already proved theorems).
+
+We will focus on how \MATITA{} ensures the interesting kind
+of consistency during the formalization of a mathematical theory,
+giving the user the freedom of adding, removing, modifying objects
+without losing the feeling of an always visible and browsable
+library.
+
+\subsubsection{Compilation}
+
+The typechecker component guarantees that if an object is well typed
+it depends only on well typed objects available in the library,
+which is exactly what we need to be sure that the logical consistency of
+the library is preserved. We only have to find the right order of
+compilation of the scripts that compose the user development.
+
+For this purpose we provide a tool called \MATITADEP{}
+that takes as input the list of files that compose the development and
+outputs their dependencies in a format suitable for the GNU \texttt{make} tool.
+The user is not asked to run \MATITADEP{} by hand, but
+simply to tell \MATITA{} the root directory of his development (where all
+script files can be found) and \MATITA{} will handle all the compilation
+related tasks, including dependencies calculation.
+To compute dependencies it is enough to look at the script files for
+inclusions of other parts of the development or for explicit
+references to other objects (i.e. with explicit aliases, see
+\ref{sec:disambaliases}).
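The dependency computation can be sketched as follows. The \texttt{include} syntax and the \texttt{.ma}/\texttt{.moo} extensions are taken from the surrounding notes, but the function itself is purely illustrative and is not the \MATITADEP{} implementation.

```python
# Sketch of matitadep-style dependency extraction (illustrative only):
# each script depends on the compiled form of every script it includes;
# the output is a set of rules suitable for GNU make.
import re

INCLUDE = re.compile(r'include\s+"([^"]+)"')

def dependencies(sources):
    """sources: filename -> script text; returns GNU make rules."""
    rules = []
    for name, text in sorted(sources.items()):
        deps = [inc.replace(".ma", ".moo") for inc in INCLUDE.findall(text)]
        target = name.replace(".ma", ".moo")
        rules.append(f"{target}: {name} {' '.join(deps)}".rstrip())
    return "\n".join(rules)

scripts = {"nat.ma": 'theorem plus_n_O: ...',
           "le.ma": 'include "nat.ma"\ntheorem lt_to_le: ...'}
print(dependencies(scripts))
# le.moo: le.ma nat.moo
# nat.moo: nat.ma
```

Given such rules, GNU \texttt{make} computes a correct compilation order automatically, which is exactly what is needed to preserve the logical consistency of the library.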
+
+The output of the compilation is immediately available to the user
+through the \WHELP{} technology, since all metadata are stored in a
+user-specific area of the database where the search engine has read
+access, and all the automated tactics that operate on the whole
+library, like \AUTO, have full visibility of the newly defined objects.
+
+Compilation is rather simple, and the only tricky case is when we want
+to compile again the same script, maybe after the removal of a
+theorem. Here the policy is simple: decompile it before recompiling.
+As we will see in the next section, decompilation ensures that
+there will be no theorems in the development that depend on the
+removed items.
+
+\subsubsection{Decompilation}
+
+Decompiling an object involves the (recursive)
+decompilation of all the objects that depend on it.
+
+Reverse dependencies can be computed in two
+ways, using the relational database or using a simpler set of metadata
+that \MATITA{} saves in the filesystem as a result of compilation. The
+former technique is the same used by the \emph{Dependency Analyzer}
+described in \cite{zack-master} and really depends on a relational
+database.
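The recursive step can be sketched as a transitive closure over the reverse dependency relation. This is an illustration under the assumption that dependencies are available as a simple map; object names are invented for the example.

```python
# Sketch of recursive decompilation: removing an object forces the removal
# of every object that (transitively) depends on it, so that no dangling
# references survive. `deps` maps each object to the objects it uses.
def to_decompile(obj, deps):
    """Return the set of objects to remove when `obj` is decompiled."""
    # Invert the dependency relation: who uses what.
    rdeps = {}
    for o, used in deps.items():
        for u in used:
            rdeps.setdefault(u, set()).add(o)
    removed, todo = set(), [obj]
    while todo:
        o = todo.pop()
        if o not in removed:
            removed.add(o)
            todo.extend(rdeps.get(o, ()))
    return removed

# le_to_lt uses lt, which uses le; plus is independent.
deps = {"le_to_lt": {"lt"}, "lt": {"le"}, "le": set(), "plus": set()}
assert to_decompile("le", deps) == {"le", "lt", "le_to_lt"}
```

Whether `deps` comes from the relational database or from the filesystem metadata only changes how the map is obtained, not the closure computation itself.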
+
+The latter is a fall-back in case the database is not available.\footnote{Due to
+the complex deployment of a large piece of software like a database,
+it is a common practice for the \HELM{} team to use a shared remote
+database, that may be unavailable if the user workstation lacks
+network connectivity.} This facility has to be intended only as a fall-back,
+since the queries of the \WHELP{} technology require a working database.
+
+Decompilation guarantees that if an object is removed there are no
+dangling references to it, and that the part of the library still
+compiled is logically consistent. Since decompilation involves the
+removal of all the results of the compilation, metadata included, the
+library browsable through the \WHELP{} technology is always kept up to date.
+
+\subsubsection{Interactive and batch (de)compilation}
+
+\MATITA{} includes an interactive graphical interface and a batch
+compiler (\MATITAC). Only the former is intended to be used directly by the
+user; the latter is automatically invoked when a
+part of the user development is required (for example issuing an
+\texttt{include} command) but not yet compiled.
+
+While they share the same engine for compilation and decompilation,
+they provide different granularity. The batch compiler is only able to
+compile a whole script file and, conversely, to decompile only a whole
+script, and consequently all the other scripts that rely on an object
+defined in it. The interactive interface is able to execute single steps
+of compilation, that may include the definition of an object, and
+consequently to undo single steps, thus removing single objects.
+
+\subsection{Automation}
+
+\subsection{\MATITA's naming convention}
+A minor but not entirely negligible aspect of \MATITA{} is that of
+adopting a (semi-)rigid naming convention for identifiers, derived from
+our studies about metadata for statements.
+The convention is only applied to identifiers for theorems
+(not definitions), and relates the name of a proof to its statement.
+The basic rules are the following:
+\begin{itemize}
+\item each identifier is composed of an ordered list of (short)
+names occurring in a left to right traversal of the statement;
+\item all identifiers should (but this is not strictly compulsory)
+be separated by an underscore;
+\item identifiers in two different hypotheses, or in a hypothesis
+and in the conclusion, must be separated by the string ``\verb+_to_+'';
+\item the identifier may be followed by a numerical suffix, or a
+single or double apostrophe.
+
+\end{itemize}
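Ignoring numerical suffixes and apostrophes, the rules above can be sketched as the following naive enumeration of legal names. This is only an illustration of the convention; \MATITA{} does not work this way internally.

```python
# Sketch of the naming convention: from each hypothesis/conclusion pick an
# ordered sublist of its short names (possibly none), join names with '_'
# and the non-empty parts with '_to_'. (Illustrative enumeration only.)
from itertools import combinations, product

def legal_names(groups):
    """groups: for each hypothesis and for the conclusion, in order, the
    list of short names occurring in it, left to right."""
    def subseqs(names):
        return ["_".join(c)
                for r in range(1, len(names) + 1)
                for c in combinations(names, r)]
    options = [[None] + subseqs(g) for g in groups]
    names = set()
    for pick in product(*options):
        chosen = [p for p in pick if p is not None]
        if chosen:
            names.add("_to_".join(chosen))
    return names

# \forall n:nat. n = plus n O   (conclusion only)
assert "plus_n_O" in legal_names([["eq", "n", "plus", "n", "O"]])
# \forall n,m:nat. n < m \to n \leq m
names = legal_names([["lt", "n", "m"], ["le", "n", "m"]])
assert "lt_to_le" in names and "lt_le" not in names
```

The enumeration makes the boundary rule visible: any name crossing from a hypothesis to the conclusion necessarily contains \verb+_to_+.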
+Take for instance the theorem
+\[\forall n:nat. n = plus \; n\; O\]
+Possible legal names are: \verb+plus_n_O+, \verb+plus_O+,
+\verb+eq_n_plus_n_O+ and so on.
+Similarly, consider the theorem
+\[\forall n,m:nat. n<m \to n \leq m\]
+In this case \verb+lt_to_le+ is a legal name,
+while \verb+lt_le+ is not.\\
+But what about, say, the symmetric law of equality? Probably you would like
+to name such a theorem with something explicitly recalling symmetry.
+The correct approach,
+in this case, is the following. You should start with defining the
+symmetric property for relations
+
+\[definition\;symmetric\;=\;\lambda A:Type.\lambda R.\forall x,y:A.\;R\;x\;y \to R\;y\;x \]
+
+Then, you may state the symmetry of equality as
+\[ \forall A:Type. symmetric \;A\;(eq \; A)\]
+and \verb+symmetric_eq+ is a valid \MATITA{} name for such a theorem.
+So, somehow unexpectedly, the introduction of a semi-rigid naming convention
+has an important beneficial effect on the global organization of the library,
+forcing the user to define abstract notions and properties before
+using them (and formalizing such use).