+\section{\HELM{} library(??)}
+
+\subsection{A fully visible library}
+\ASSIGNEDTO{csc}
+\NOTE{we assume the content-centric approach has already been discussed}
+Our commitment to the content-centric view of the architecture of the system
+has important consequences for the user's experience and for the functionalities
+of several components of \MATITA. In the content-centric view the library
+of mathematical knowledge is an already existing and completely autonomous
+entity that we are allowed to exploit and augment using \MATITA. Thus, in
+principle, when the user starts to prove a new theorem she has complete
+visibility of the library: she can refer to every definition and lemma,
+including by means of the mathematical notation already developed. In a
+similar way, every form of automation of the system must be able to analyze
+and possibly exploit every notion in the library.
+
+The benefits of this approach largely justify the non-negligible price to be
+paid in the development of several components. We now analyse a few of the
+causes of this additional complexity.
+
+\subsubsection{Ambiguity}
+A rich mathematical library includes equivalent definitions and representations
+of the same notion. Moreover, mathematical notation inside a rich library is
+inevitably heavily overloaded. As a consequence, every mathematical expression
+the user provides is highly ambiguous, since all the definitions,
+representations and special notations are available to the user at once.
+
+The usual solution to the problem, adopted for instance in Coq, is to
+restrict the user's scope to just one interpretation for each definition,
+representation or notation. In this way much of the ambiguity is removed, but
+the user is burdened with declaring somehow what is in scope and with using
+special syntax whenever she needs to refer to something not in scope.
+
+Even with this approach ambiguity cannot be completely removed, since implicit
+coercions can be arbitrarily inserted by the system to ``change the type''
+of subterms that do not have the expected type. Implicit coercions are usually
+used to overcome the absence of a subtyping relation mimicking the subset
+relation found in set theory. For instance, the expression
+$\forall n \in nat. 2 * n * \pi \equiv_\pi 0$ is correct in set theory since
+the set of natural numbers is a subset of that of real numbers; the
+corresponding expression $\forall n:nat. 2*n*\pi \equiv_\pi 0$ is not well typed
+and requires the automatic insertion of the coercion
+$\mathit{real\_of\_nat}: nat \to R$ either around both $2$ and $n$ (to make
+both products be on real numbers) or around the product $2*n$. The usual
+approach consists in either rejecting the ambiguous term or arbitrarily
+choosing one of the interpretations. For instance, Coq rejects the declaration
+of coercions that have alternatives (i.e. already declared coercions with the
+same domain and codomain) or that are obtained by composing other coercions,
+in order to avoid making several terms highly ambiguous by allowing the
+insertion of any one of the alternative coercions. Coq also arbitrarily
+chooses how to insert coercions in terms to make them well typed when there is
+more than one possibility (as in the previous example).
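+
+Spelling out the two alternatives just mentioned, the automatically coerced
+versions of the expression read
+\[
+\begin{array}{l}
+\forall n:nat.\ (\mathit{real\_of\_nat}\ 2) * (\mathit{real\_of\_nat}\ n) * \pi \equiv_\pi 0 \\[1ex]
+\forall n:nat.\ (\mathit{real\_of\_nat}\ (2 * n)) * \pi \equiv_\pi 0
+\end{array}
+\]
+and the system has to pick one of them (or reject the input).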
+
+The approach we are following is radically different: it consists in dealing
+with ambiguous expressions instead of avoiding them. As a last resort,
+when the system is unable to disambiguate the input, the user is interactively
+asked to provide more information, which is recorded to avoid asking the
+same question again in subsequent processing of the same input.
+More details on our approach can be found in Sect.~\ref{sec:disambiguation}.
+
+\subsubsection{Consistency}
+A large mathematical library is likely to be logically inconsistent.
+It may contain incompatible axioms or alternative conjectures and it may
+even use definitions in incompatible ways. To clarify this last point,
+consider two identical definitions of a set of elements of a given type
+(or of a category of objects of a given type). Let us call the two definitions
+$A$-Set and $B$-Set (or $A$-Category and $B$-Category).
+It is perfectly legitimate to either form the $A$-Set of every $B$-Set
+or the $B$-Set of every $A$-Set (the same holds for categories). This just
+corresponds to assuming that a $B$-Set (respectively an $A$-Set) is a small
+set, whereas an $A$-Set (respectively a $B$-Set) is a big set (possibly of
+small sets). However, if one part of the library assumes $A$-Sets to be the
+small ones and another part of the library assumes $B$-Sets to be the small
+ones, the library as a whole will be logically inconsistent.
+
+Logical inconsistency has never been a problem in the daily work of a
+mathematician. The mathematician simply imposes on himself the discipline of
+restricting himself to consistent subsets of the mathematical knowledge.
+However, in doing so he does not choose the subset in advance, forgetting
+the rest of his knowledge. On the contrary, he may proceed with a sort of
+top-down strategy: he may always inspect or use any part of his knowledge, but
+when he actually does so he should recursively check that no inconsistency is
+being exploited.
+
+Contrary to mathematical practice, the usual tendency in the world of
+assisted automation is to build a logical environment (a consistent
+subset of the library) in a bottom-up way, checking the consistency of a
+new axiom or theorem as soon as it is added to the environment. No lemma
+or definition outside the environment can be used until it is added to the
+environment, after every notion it depends on. Moreover, very often the logical
+environment is the only part of the library that can be inspected,
+that lemmas can be searched in, and that automatic tactics can exploit.
+
+Moving notions one by one from the library to the environment is a costly
+operation, since it involves re-checking the correctness of each notion.
+As a consequence mathematical notions are packaged into theories that must
+be added to the environment as a whole. However, the consistency problem is
+then only raised at the level of theories: theories must be imported in a
+bottom-up way and the system must check that no inconsistency arises.
+
+The practice of limiting the scope on the library to the logical environment
+is contrary to our commitment of being able to fully exploit as much as possible
+of the library at any given time. To reconcile consistency and visibility
+we have departed from the traditional implementation of an environment,
+allowing environments to be built on demand in a top-down way. The new
+implementation is based on a modified meta-theory that changes the
+convertibility, type checking, unification and refinement judgements.
+The modified meta-theory is fully described in \cite{libraryenvironments}.
+Here we just remark how a strong commitment on the way the user interacts
+with the library has led to modifications of the logical core of the proof
+assistant. This is evidence that breakthroughs in the user interfaces
+and in the way the user interacts with the tools and with the library can
+be achieved only by means of strong architectural changes.
+
+\subsubsection{Accessibility}
+A large library that is completely in scope needs effective indexing and
+searching methods to make the user productive. Libraries of formal results
+are particularly critical since they hold a large percentage of technical
+lemmas that do not have a meaningful name and that must be retrieved
+using advanced methods based on matching, unification, generalization and
+instantiation.
+
+The efficiency of searching inside the library becomes critical when
+automatic tactics exploit the library during proof search. In this
+scenario the tactics must retrieve a set of candidates for backward or
+forward reasoning in a few milliseconds.
+
+Several proof assistants choose to use ad-hoc data structures,
+such as context trees, to index all the terms currently in scope. These
+data structures are especially designed to quickly retrieve terms up
+to matching, instantiation and generalization. All these data structures
+try to maximize the sharing of identical subterms, so that matching can be
+reduced to a visit of the tree (or DAG) that holds all the maximally shared
+terms together.
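+
+To convey the idea, the following deliberately naive OCaml sketch only groups
+terms by their head constant and retrieves candidates accordingly; real context
+trees are far more refined and, in particular, the maximal sharing of subterms
+is not modelled here. The datatypes and functions are illustrative and do not
+belong to any actual implementation.
+
+\begin{verbatim}
+(* Toy terms: a reference to a constant or an application. *)
+type term = Ref of string | Appl of term list
+
+(* The head constant of a term, if any. *)
+let head = function
+  | Ref c -> Some c
+  | Appl (Ref c :: _) -> Some c
+  | _ -> None
+
+(* Index all terms currently in scope by their head constant. *)
+let index (terms : term list) : (string, term list) Hashtbl.t =
+  let tbl = Hashtbl.create 97 in
+  List.iter
+    (fun t ->
+       match head t with
+       | Some c ->
+           let prev = Option.value ~default:[] (Hashtbl.find_opt tbl c) in
+           Hashtbl.replace tbl c (t :: prev)
+       | None -> ())
+    terms;
+  tbl
+
+(* Candidate terms for matching a query: those sharing its head constant. *)
+let candidates tbl query =
+  match head query with
+  | Some c -> Option.value ~default:[] (Hashtbl.find_opt tbl c)
+  | None -> []
+\end{verbatim}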
+
+Since the terms to be retrieved (or at least their initial prefixes)
+are stored (actually ``melted'' together) in the data structure, these data
+structures must collect all the terms in a single location. In other words,
+adopting such data structures means centralizing the library.
+
+In the \MOWGLI{} project we have tried to follow an alternative approach
+that consists in keeping the library fully distributed and indexing it
+by means of spiders that collect metadata and store them in a database.
+The challenge is to collect as small a set of metadata as possible that still
+provides enough information to approximate the matching
+operation. A matching operation is then performed in two steps. The first
+step is a query to the remote search engine that stores the metadata, in
+order to detect a (hopefully small) complete set of candidates that could
+match. Completeness means that no term that matches may be excluded from
+the set of candidates. The second step consists in retrieving all the
+candidates from the distributed library and attempting the actual matching.
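+
+The following OCaml sketch illustrates the two-step process under strong
+simplifying assumptions: the metadata associated to a term is reduced to the
+set of constants occurring in it, and \texttt{fetch} and \texttt{matches}
+(standing for the retrieval of a candidate from the distributed library and
+for the actual matching test) are left abstract. None of these names belongs
+to the actual \WHELP{} implementation.
+
+\begin{verbatim}
+type uri = string
+type term = Ref of string | Appl of term list
+
+(* Step 1: query the metadata store.  A candidate is kept whenever its
+   metadata contain all the constants of the query, so the result is a
+   complete, possibly over-approximated, set of candidates. *)
+let approx_match (metadata : (uri * string list) list)
+                 (query_constants : string list) : uri list =
+  List.filter_map
+    (fun (u, occurring) ->
+       if List.for_all (fun c -> List.mem c occurring) query_constants
+       then Some u else None)
+    metadata
+
+(* Step 2: retrieve every candidate from the (distributed) library and
+   attempt the actual matching. *)
+let retrieve_and_match ~(fetch : uri -> term)
+                       ~(matches : term -> bool)
+                       (candidates : uri list) : uri list =
+  List.filter (fun u -> matches (fetch u)) candidates
+\end{verbatim}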
+
+In the last few years we have progressively improved this technique.
+Our achievements can be found in \cite{query1,query2,query3}.
+
+The techniques and tools already developed have been integrated in \MATITA{},
+which is able to contact a remote \WHELP{} search engine \cite{whelp} or
+can be directly linked to the \WHELP{} code. In either case the database
+used to store the metadata can be local or remote.
+
+Our current challenge consists in the exploitation of \WHELP{} inside
+\MATITA. In particular we are developing a set of tactics, for instance
+based on paramodulation \cite{paramodulation}, that perform queries to \WHELP{}
+to restrict the scope on the library to a set of interesting candidates,
+greatly reducing the search space. Moreover, queries to \WHELP{} are performed
+during the parsing of user-provided terms to disambiguate them.
+
+In Sect.~\ref{sec:metadata} we describe the technique adopted in \MATITA.
+
+\subsubsection{Library management}
+
+
+\subsection{Searching and indexing}
+\label{sec:metadata}
+\ASSIGNEDTO{andrea}
+
+\subsection{auto}
+\ASSIGNEDTO{andrea}
+
+\subsection{Explicit substitutions vs modules}
+\ASSIGNEDTO{csc}
+
+\subsection{XML / library management}
+\ASSIGNEDTO{gares}
+
+
+\section{User Interface (to be changed)}
+
+\subsection{Absence of a proof tree / natural language rendering}
+\ASSIGNEDTO{andrea}
+
+\subsection{Disambiguation}
+\label{sec:disambiguation}
+\ASSIGNEDTO{zack}
+
+ \begin{table}
+ \caption{\label{tab:termsyn} Concrete syntax of CIC terms: built-in
+ notation\strut}
+ \hrule
+ \[
+ \begin{array}{@{}rcll@{}}
+ \NT{term} & ::= & & \mbox{\bf terms} \\
+ & & x & \mbox{(identifier)} \\
+ & | & n & \mbox{(number)} \\
+ & | & s & \mbox{(symbol)} \\
+ & | & \mathrm{URI} & \mbox{(URI)} \\
+ & | & \verb+_+ & \mbox{(implicit)}\TODO{sync} \\
+ & | & \verb+?+n~[\verb+[+~\{\NT{subst}\}~\verb+]+] & \mbox{(meta)} \\
+ & | & \verb+let+~\NT{ptname}~\verb+\def+~\NT{term}~\verb+in+~\NT{term} \\
+ & | & \verb+let+~\NT{kind}~\NT{defs}~\verb+in+~\NT{term} \\
+ & | & \NT{binder}~\{\NT{ptnames}\}^{+}~\verb+.+~\NT{term} \\
+ & | & \NT{term}~\NT{term} & \mbox{(application)} \\
+ & | & \verb+Prop+ \mid \verb+Set+ \mid \verb+Type+ \mid \verb+CProp+ & \mbox{(sort)} \\
+ & | & \verb+match+~\NT{term}~ & \mbox{(pattern matching)} \\
+ & & ~ ~ [\verb+[+~\verb+in+~x~\verb+]+]
+ ~ [\verb+[+~\verb+return+~\NT{term}~\verb+]+] \\
+ & & ~ ~ \verb+with [+~[\NT{rule}~\{\verb+|+~\NT{rule}\}]~\verb+]+ & \\
+ & | & \verb+(+~\NT{term}~\verb+:+~\NT{term}~\verb+)+ & \mbox{(cast)} \\
+ & | & \verb+(+~\NT{term}~\verb+)+ \\
+ \NT{defs} & ::= & & \mbox{\bf mutual definitions} \\
+ & & \NT{fun}~\{\verb+and+~\NT{fun}\} \\
+ \NT{fun} & ::= & & \mbox{\bf functions} \\
+ & & \NT{arg}~\{\NT{ptnames}\}^{+}~[\verb+on+~x]~\verb+\def+~\NT{term} \\
+ \NT{binder} & ::= & & \mbox{\bf binders} \\
+ & & \verb+\forall+ \mid \verb+\lambda+ \\
+ \NT{arg} & ::= & & \mbox{\bf single argument} \\
+ & & \verb+_+ \mid x \\
+ \NT{ptname} & ::= & & \mbox{\bf possibly typed name} \\
+ & & \NT{arg} \\
+ & | & \verb+(+~\NT{arg}~\verb+:+~\NT{term}~\verb+)+ \\
+ \NT{ptnames} & ::= & & \mbox{\bf bound variables} \\
+ & & \NT{arg} \\
+ & | & \verb+(+~\NT{arg}~\{\verb+,+~\NT{arg}\}~[\verb+:+~\NT{term}]~\verb+)+ \\
+ \NT{kind} & ::= & & \mbox{\bf induction kind} \\
+ & & \verb+rec+ \mid \verb+corec+ \\
+ \NT{rule} & ::= & & \mbox{\bf rules} \\
+ & & x~\{\NT{ptname}\}~\verb+\Rightarrow+~\NT{term}
+ \end{array}
+ \]
+ \hrule
+ \end{table}
+
+
+\subsubsection{Term input}
+
+The primary form of user interaction employed by \MATITA{} is textual script
+editing: the user edits a script and evaluates step by step its component
+\emph{statements}. Examples of statements are inductive type definitions,
+theorem declarations, LCF-style tacticals, and macros (e.g. \texttt{Check} can
+be used to ask the system to refine a given term and pretty print the result).
+Since many statements refer to terms of the underlying calculus, \MATITA{} needs
+a concrete syntax able to encode terms of the Calculus of Inductive
+Constructions.
+
+Two of the requirements in the design of such a syntax are apparently in
+contrast:
+\begin{enumerate}
+  \item the syntax should be as close as possible to common mathematical practice
+  and implement widespread mathematical notations;
+  \item each term described by the syntax should be unambiguous, meaning that
+  there should exist a function which associates a CIC term to it.
+\end{enumerate}
+
+These two requirements are addressed in \MATITA{} by means of two mechanisms
+which work together: \emph{term disambiguation} and \emph{extensible notation}.
+Their interaction is visible in the architecture of the \MATITA{} input phase,
+depicted in Fig.~\ref{fig:inputphase}. The architecture is articulated as a
+pipeline of three levels: the concrete syntax level (level 0) is the one the user
+has to deal with when inserting CIC terms; the abstract syntax level (level 2)
+is an internal representation which intuitively encodes mathematical formulae at
+the content level~\cite{adams,mkm-structure}; the last level is that of
+CIC terms.
+
+\begin{figure}[ht]
+ \begin{center}
+ \includegraphics[width=0.9\textwidth]{input_phase}
+ \caption{\MATITA{} input phase}
+ \label{fig:inputphase}
+ \end{center}
+\end{figure}
+
+Requirement (1) is addressed by a built-in concrete syntax for terms, described
+in Tab.~\ref{tab:termsyn}, and by the extensible notation mechanism, which
+offers a way to extend the available mathematical notations. Extensible
+notation, which is also in charge of providing a parsing function mapping
+concrete syntax terms to content level terms, is described in
+Sect.~\ref{sec:notation}. Requirement (2) is addressed by the joint action of
+that parsing function and of disambiguation, which provides a function from
+content level terms to CIC terms.
+
+\subsubsection{Sources of ambiguity}
+
+The translation from content level terms to CIC terms is not straightforward
+because some nodes of the content encoding admit more than one CIC encoding,
+invalidating requirement (2).
+
+\begin{example}
+  \label{ex:disambiguation}
+
+  Consider the concrete syntax level term \texttt{\TEXMACRO{forall} x. x +
+  ln 1 = x} of Fig.~\ref{fig:inputphase}(a); it could be the type of a lemma
+  the user wants to prove. Assuming that both \texttt{+} and \texttt{=} are
+  parsed as infix operators, all the following questions are legitimate and
+  must be answered before a CIC term can be obtained from its content level
+  encoding (Fig.~\ref{fig:inputphase}(b)):
+
+  \begin{enumerate}
+
+  \item Since \texttt{ln} is an unbound identifier, which CIC constant does it
+   represent? Many different theorems in the library may share its (rather
+   short) name \dots
+
+  \item Which kind of number (\IN, \IR, \dots) does the \texttt{1} literal
+   stand for? Which encoding is used in CIC to represent it? E.g., assuming
+   $1\in\IN$, is it a unary or a binary encoding?
+
+  \item Which kind of equality does the ``='' node represent? Is it Leibniz's
+   polymorphic equality? Is it a decidable equality over \IN, \IR, \dots?
+
+  \end{enumerate}
+
+\end{example}
+
+In \MATITA, three \emph{sources of ambiguity} are admitted for content level
+terms: unbound identifiers, literal numbers, and operators. Each instance of an
+ambiguity source (ambiguous entity) occurring in a content level term is
+associated to a \emph{disambiguation domain}. Intuitively, a disambiguation
+domain is a set of CIC terms which may be substituted for an ambiguous entity
+during disambiguation. Each item of the domain is said to be an
+\emph{interpretation} of the ambiguous entity.
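+
+The following OCaml sketch summarizes these notions; the datatypes are
+illustrative stand-ins introduced here for the sake of explanation, not the
+actual \MATITA{} ones.
+
+\begin{verbatim}
+(* A toy stand-in for (incomplete) CIC terms. *)
+type cic_term =
+  | Meta of int                  (* metavariable / placeholder *)
+  | Ref of string                (* reference to a library object, by URI *)
+  | Appl of cic_term list        (* application *)
+
+(* The three admitted sources of ambiguity of a content level term. *)
+type ambiguity_source =
+  | Ident of string              (* unbound identifier, e.g. "ln" *)
+  | Number of int                (* literal number, e.g. 1 *)
+  | Operator of string           (* overloaded operator, e.g. "+" *)
+
+(* A disambiguation domain: the candidate CIC terms for one ambiguous
+   entity; each element is an interpretation of that entity. *)
+type domain = cic_term list
+
+(* An interpretation for a whole term maps each of its ambiguous
+   entities to one element of the corresponding domain. *)
+type interpretation = (ambiguity_source * cic_term) list
+\end{verbatim}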
+
+\emph{Unbound identifiers} (question 1) are ambiguous entities since the
+namespace of CIC objects is not flat and the same identifier may denote many
+of them. For example, the short name \texttt{plus\_assoc} in the \HELM{}
+library is shared by three different theorems stating the associative property
+of different additions. This kind of ambiguity is avoidable if the user is
+willing to use long names (in the form of URIs in the \texttt{cic://} scheme)
+in the concrete syntax, with the obvious drawback of obtaining long and
+unreadable terms.
+
+Given an unbound identifier, the corresponding disambiguation domain is computed
+by querying the library for all constants, inductive types, and inductive type
+constructors having it as their short name (see the \LOCATE{} query in
+Sect.~\ref{sec:metadata}).
+
+\emph{Literal numbers} (question 2) are ambiguous entities as well since
+different kinds of numbers can be encoded in CIC (\IN, \IR, \IZ, \dots) using
+different encodings. Considering the restricted example of natural numbers, we
+can for instance encode them in CIC using inductive datatypes with a number of
+constructors equal to the encoding base plus 1, obtaining one encoding for each
+base.
+
+For each possible way of mapping a literal number to a CIC term, \MATITA{} is
+aware of a \emph{number interpretation function} which, when applied to the
+natural number denoted by the literal\footnote{at the moment only literal
+natural numbers are supported in the concrete syntax}, returns a corresponding
+CIC term. The disambiguation domain for a given literal number is built
+applying to the literal all available number interpretation functions in turn.
+
+Number interpretation functions can be defined in OCaml or directly using
+\TODO{notation for numbers}.
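+
+As an illustration, a number interpretation function for the unary (Peano)
+encoding of natural numbers could look like the following OCaml sketch; the
+\texttt{cic\_term} type and the URIs of the two constructors are hypothetical
+placeholders, not the actual \MATITA{} datatype or library URIs.
+
+\begin{verbatim}
+(* Toy CIC terms: constructor references and applications. *)
+type cic_term =
+  | Ref of string                (* reference to a constructor, by URI *)
+  | Appl of cic_term list        (* application *)
+
+(* Hypothetical URIs of the two constructors of the unary naturals. *)
+let o_uri = "cic:/.../nat.ind#xpointer(1/1/1)"   (* O *)
+let s_uri = "cic:/.../nat.ind#xpointer(1/1/2)"   (* S *)
+
+(* Interpret the literal [n] as the term S (S (... (S O))). *)
+let rec interp_nat_unary (n : int) : cic_term =
+  if n = 0 then Ref o_uri
+  else Appl [Ref s_uri; interp_nat_unary (n - 1)]
+\end{verbatim}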
+
+\emph{Operators} (question 3) are intuitively heads of applications, and as
+such they are always applied to a (possibly empty) sequence of arguments. Their
+ambiguity is a necessity, since it is often the case that some notation is used
+in an overloaded fashion to hide the use of different CIC constants which
+encode similar concepts. For example, in the standard library of \MATITA{} the
+infix \texttt{+} notation is available and its parsing builds a binary
+\texttt{Op(+)} node, whose disambiguation domain may refer to different
+constants like the addition over natural numbers
+\URI{cic:/matita/nat/plus/plus.con} or that over real numbers of the \COQ{}
+standard library \URI{cic:/Coq/Reals/Rdefinitions/Rplus.con}.
+
+For each possible way of mapping an operator application to a CIC term,
+\MATITA{} knows an \emph{operator interpretation function} which, when applied
+to an operator and its arguments, returns a CIC term. The disambiguation domain
+for a given operator is built applying to the operator and its arguments all
+available operator interpretation functions in turn.
+
+Operator interpretation functions can be added using the
+\texttt{interpretation} statement. For example, among the first lines of the
+script \FILE{matita/library/logic/equality.ma} from the \MATITA{} standard
+library we read:
+
+\begin{grafite}
+interpretation "leibnitz's equality"
+ 'eq x y =
+ (cic:/matita/logic/equality/eq.ind#xpointer(1/1) _ x y).
+\end{grafite}
+
+Evaluating it in \MATITA{} will add an operator interpretation function for the
+binary operator \texttt{eq} which expands to the CIC term on the right hand
+side of the statement. That CIC term must be written using only built-in
+concrete syntax and may not contain any ambiguity source; still, it can refer
+to operator arguments bound on the left hand side and can contain implicit
+terms (denoted with \texttt{\_}) which will be expanded to fresh
+metavariables. The latter feature is used in the example above for the first
+argument of Leibniz's polymorphic equality.
+
+\subsubsection{Disambiguation algorithm}
+
+A \emph{disambiguation algorithm} takes as input a content level term and
+returns a fully determined CIC term. The key observation on which a
+disambiguation algorithm is based is that, given a content level term with
+more than one source of ambiguity, not all possible combinations of
+interpretations lead to a typable CIC term. In the term of
+Ex.~\ref{ex:disambiguation}, for instance, the interpretation of \texttt{ln}
+as a function from \IR{} to \IR{} and the interpretation of \texttt{1} as the
+Peano number $1$ cannot coexist. The notion of ``cannot coexist'' in the
+disambiguation of \MATITA{} is defined on top of the \emph{refiner} for CIC
+terms described in~\cite{csc-phd}.
+
+Briefly, a refiner is a function whose input is an \emph{incomplete CIC term}
+$t_1$ --- i.e. a term where metavariables occur (Sect.~\ref{sec:metavariables})
+--- and whose output is either:\NOTE{summary description of the refiner, it
+could be moved elsewhere}
+
+\begin{enumerate}
+
+ \item an incomplete CIC term $t_2$, where $t_2$ is a well-typed term obtained
+ by assigning a type to each metavariable in $t_1$ (in case of dependent types,
+ instantiation of some of the metavariables occurring in $t_1$ may take place
+ as well);
+
+ \item $\epsilon$, meaning that no well-typed term could be obtained by
+ assigning types to the metavariables in $t_1$ and instantiating them;
+
+ \item $\bot$, meaning that the refiner is unable to decide which of the two
+ cases above applies (refinement is semi-decidable).
+
+\end{enumerate}
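+
+In OCaml, the three possible outcomes can be summarized by a sum type such as
+the following sketch (the names are illustrative and do not belong to the
+actual \MATITA{} code):
+
+\begin{verbatim}
+(* Toy stand-in for (incomplete) CIC terms. *)
+type cic_term = Meta of int | Ref of string | Appl of cic_term list
+
+(* Possible outcomes of refining an incomplete term t1. *)
+type refinement =
+  | Refined of cic_term   (* a well-typed term t2 obtained from t1 *)
+  | Failed                (* epsilon: no well-typed term can be obtained *)
+  | Undetermined          (* bottom: the refiner is unable to decide *)
+\end{verbatim}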
+
+On top of a CIC refiner \MATITA{} implements an efficient disambiguation
+algorithm, which is outlined below. It takes as input a content level term $c$
+and proceeds as follows:
+
+\begin{enumerate}
+
+ \item Create disambiguation domains $\{D_i | i\in\mathit{Dom}(c)\}$, where
+ $\mathit{Dom}(c)$ is the set of ambiguity sources of $c$. Each $D_i$ is a set
+ of CIC terms and can be built as described above.
+
+ \item An \emph{interpretation} $\Phi$ for $c$ is a map associating an
+  incomplete CIC term to each ambiguity source of $c$. Given $c$ and one of its
+  interpretations, an incomplete CIC term is fully determined by replacing each
+  ambiguity source of $c$ with its mapping in the interpretation and by
+  injecting the remaining structure of the content level into the CIC level
+  (e.g. replacing the application of the content level with the application of
+  the CIC level). This operation is informally called ``interpreting $c$ with
+  $\Phi$''.
+
+ Create an initial interpretation $\Phi_0 = \{\phi_i | \phi_i = \_,
+ i\in\mathit{Dom}(c)\}$, which associates a fresh metavariable to each source
+ of ambiguity of $c$. During this step, implicit terms are expanded to fresh
+ metavariables as well.
+
+ \item Refine the current incomplete CIC term (i.e. the term obtained
+  interpreting $c$ with $\Phi_i$).
+
+  If the refinement succeeds or is undetermined, the next interpretation
+  $\Phi_{i+1}$ will be created \emph{making a choice}, that is replacing in the
+  current interpretation one of the metavariables appearing in $\Phi_i$ with
+  one of the possible choices from the corresponding disambiguation domain. The
+  metavariable to be replaced is chosen following a preorder visit of the
+  ambiguous term. Then, step 3 is attempted again with the new interpretation.
+
+ If the refinement fails the current set of choices cannot lead to a well-typed
+ term and backtracking of the current interpretation is attempted.
+
+ \item Once an unambiguous correct interpretation is found (i.e. $\Phi_i$ no
+  longer contains any placeholder), backtracking is attempted anyway in order
+  to find the other correct interpretations.
+
+ \item Let $n$ be the number of interpretations that survived step 4. If $n=0$,
+  signal a type error. If $n=1$, we have found exactly one (incomplete) CIC
+  term corresponding to the content level term $c$: return it as the output of
+  the disambiguation phase. If $n>1$, we have found several different
+  (incomplete) CIC terms which can correspond to the content level term: let
+  the user choose one of the $n$ interpretations and return the corresponding
+  term.
+
+\end{enumerate}
+
+The efficiency of this algorithm resides in the fact that, as soon as an
+incomplete CIC term is not typable, no further instantiation of the
+metavariables of the corresponding interpretation is attempted.
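+
+A compact OCaml sketch of this enumeration-with-pruning strategy follows. The
+\texttt{refine} and \texttt{interpret} functions are abstracted away and the
+datatypes are illustrative; in particular, the explicit enumeration with
+backtracking below does not correspond to the actual \MATITA{} implementation,
+which (as recalled below) avoids backtracking.
+
+\begin{verbatim}
+(* Payload-free version of the refinement outcome sketched earlier. *)
+type outcome = Refined | Failed | Undetermined
+
+(* [disambiguate ~refine ~interpret domains] returns every complete
+   assignment of candidates (one per ambiguity source, in preorder) that
+   the refiner does not reject.  Sources not yet assigned are assumed to
+   be rendered as fresh metavariables by [interpret]. *)
+let disambiguate
+    ~(refine : 'term -> outcome)
+    ~(interpret : 'cic list -> 'term)
+    (domains : 'cic list list) : 'cic list list =
+  let rec explore chosen = function
+    | [] -> [List.rev chosen]          (* all sources assigned and accepted *)
+    | domain :: rest ->
+        List.concat_map
+          (fun candidate ->
+             match refine (interpret (List.rev (candidate :: chosen))) with
+             | Failed -> []             (* prune: stop instantiating here *)
+             | Refined | Undetermined -> explore (candidate :: chosen) rest)
+          domain
+  in
+  explore [] domains
+\end{verbatim}
+
+If the resulting list is empty a type error is signalled, if it is a singleton
+the corresponding term is returned, and otherwise the user is asked to choose,
+exactly as in step 5 above.
+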
+% For example, during the disambiguation of the user input
+% \texttt{\TEXMACRO{forall} x. x*0 = 0}, an interpretation $\Phi_i$ is
+% encountered which associates $?$ to the instance of \texttt{0} on the right,
+% the real number $0$ to the instance of \texttt{0} on the left, and the
+% multiplication over natural numbers (\texttt{mult} for short) to \texttt{*}.
+% The refiner will fail, since \texttt{mult} require a natural argument, and no
+% further instantiation of the placeholder will be tried.
+
+Details of the disambiguation algorithm along with an analysis of its complexity
+can be found in~\cite{disambiguation}, where a formulation without backtracking
+(corresponding to the actual \MATITA{} implementation) is also presented.
+
+\subsubsection{Disambiguation stages}
+
+\subsection{Notation}
+\label{sec:notation}
+\ASSIGNEDTO{zack}
+
+Mathematical notation plays a fundamental role in mathematical practice: it
+helps expressing concepts of arbitrary complexity in a concise symbolic
+fashion. Its use in proof assistants like \MATITA{} is no exception. Indeed,
+formal mathematics often imposes encoding mathematical concepts at a very high
+level of detail (e.g. Peano numbers, implicit arguments), since only a
+restricted toolbox of syntactic constructions is available in the calculus.
+
+Consider for example one of the points reached while proving the distributivity
+of times over minus on natural numbers, included in the \MATITA{} standard
+library. (Part of) the reached sequent can be displayed in \MATITA{} either
+using the notation for the various arithmetical and relational operators or
+without it. The sequent rendered without notation is as follows:
+\sequent{
+\mathtt{H}: \mathtt{le}\;z\;y\\
+\mathtt{Hcut}: \mathtt{eq}\;\mathtt{nat}\;(\mathtt{plus}\;(\mathtt{times}\;x\;(\mathtt{minus}\;
+y\;z))\;(\mathtt{times}\;x\;z))\\
+(\mathtt{plus}\;(\mathtt{minus}\;(\mathtt{times}\;x\;y)\;(\mathtt{times}\;x\;z))\;
+(\mathtt{times}\;x\;z))}{
+\mathtt{eq}\;\mathtt{nat}\;(\mathtt{times}\;x\;(\mathtt{minus}\;y\;z))\;(\mathtt{minus}\;
+(\mathtt{times}\;x\;y)\;(\mathtt{times}\;x\;z))}
+while the corresponding sequent rendered with notation enabled would be:
+\sequent{
+H: z\leq y\\
+Hcut: x*(y-z)+x*z=x*y-x*z+x*z}{
+x*(y-z)=x*y-x*z}
+