From f59e3c55dde91612a2e8a16335f2a2e9137fde5f Mon Sep 17 00:00:00 2001
From: Stefano Zacchiroli
Date: Thu, 22 May 2003 07:33:44 +0000
Subject: [PATCH] first checkin - introduction - outline - auxiliary files

---
 helm/papers/calculemus-2003/.cvsignore        |    4 +
 helm/papers/calculemus-2003/Makefile          |   13 +
 .../calculemus-2003/hbugs-calculemus-2003.tex | 1020 +++++++++++++++++
 helm/papers/calculemus-2003/llncs.cls         | 1015 ++++++++++++++++
 helm/papers/calculemus-2003/outline.txt       |   53 +
 5 files changed, 2105 insertions(+)
 create mode 100644 helm/papers/calculemus-2003/.cvsignore
 create mode 100644 helm/papers/calculemus-2003/Makefile
 create mode 100644 helm/papers/calculemus-2003/hbugs-calculemus-2003.tex
 create mode 100644 helm/papers/calculemus-2003/llncs.cls
 create mode 100644 helm/papers/calculemus-2003/outline.txt

diff --git a/helm/papers/calculemus-2003/.cvsignore b/helm/papers/calculemus-2003/.cvsignore
new file mode 100644
index 000000000..8f1b5b05a
--- /dev/null
+++ b/helm/papers/calculemus-2003/.cvsignore
@@ -0,0 +1,4 @@
+*.aux
+*.dvi
+*.log
+*.ps
diff --git a/helm/papers/calculemus-2003/Makefile b/helm/papers/calculemus-2003/Makefile
new file mode 100644
index 000000000..490e00046
--- /dev/null
+++ b/helm/papers/calculemus-2003/Makefile
@@ -0,0 +1,13 @@
+TITLE=hbugs-calculemus-2003
+all: $(TITLE).ps
+%.ps: %.dvi
+	dvips -f $< > $@
+%.dvi: %.tex
+	latex $<
+show: $(TITLE).dvi
+	xdvi $(TITLE).dvi
+showps: $(TITLE).ps
+	gv $(TITLE).ps
+clean:
+	rm -f $(TITLE).{ps,dvi,log,aux}
+.PHONY: all show showps clean
diff --git a/helm/papers/calculemus-2003/hbugs-calculemus-2003.tex b/helm/papers/calculemus-2003/hbugs-calculemus-2003.tex
new file mode 100644
index 000000000..333251d6a
--- /dev/null
+++ b/helm/papers/calculemus-2003/hbugs-calculemus-2003.tex
@@ -0,0 +1,1020 @@
+\documentclass[runningheads]{llncs}
+\pagestyle{headings}
+\setcounter{page}{1}
+\usepackage{graphicx}
+\usepackage{amsfonts}
+
+% \myincludegraphics{filename}{place}{width}{caption}{label}
+\newcommand{\myincludegraphics}[5]{
+ \begin{figure}[#2]
+  \begin{center}
+   \includegraphics[width=#3]{XFIG-EPS/#1.eps}
+   \caption[#4]{#5}
+   \label{#1}
+  \end{center}
+ \end{figure}
+}
+
+\newcommand{\todo}[1]{\footnote{\textbf{TODO: #1}}}
+\newcommand{\ws}{Web-Service}
+\newcommand{\wss}{Web-Services}
+\newcommand{\hbugs}{H-Bugs}
+\newcommand{\helm}{HELM}
+
+\title{Brokers and Web-Services for Automatic Deduction: a Case Study}
+
+\author{Claudio Sacerdoti Coen \and Stefano Zacchiroli}
+
+\institute{
+ Department of Computer Science\\
+ University of Bologna\\
+ Via di Mura Anteo Zamboni 7, 40127 Bologna, ITALY\\
+ \email{sacerdot@cs.unibo.it}
+ \and
+ Department of Computer Science\\
+ \'Ecole Normale Sup\'erieure\\
+ 45, Rue d'Ulm, F-75230 Paris Cedex 05, FRANCE\\
+ \email{zack@cs.unibo.it}
+}
+
+\date{ }
+
+\begin{document}
+\sloppy
+\maketitle
+
+\begin{abstract}
+  We present a planning broker and several Web-Services for automatic
+  deduction. Each Web-Service implements one of the tactics usually
+  available in an interactive proof-assistant. When a ``proof status''
+  (an unfinished proof tree and a focus on an open goal) is submitted
+  to the broker, it dispatches the proof to the Web-Services, collects
+  the successful results, and sends them back to the client as
+  ``hints'' as soon as they are available.
+
+  In our experience this architecture turns out to be helpful both for
+  experienced users (who can benefit from the distribution of heavy
+  computations) and for beginners (who can learn from it).
+\end{abstract}
+
+\section{Introduction}
+  The \ws\ approach to software development seems to be a working
+  solution for getting rid of a wide range of incompatibilities between
+  communicating software applications. W3C's efforts in standardizing
+  the related technologies grant longevity and availability of
+  implementations for frameworks based on \wss\ for information
+  exchange. As a direct consequence, the number of such frameworks is
+  increasing and the World Wide Web is moving from a disorganized
+  repository of human-understandable HTML documents to a disorganized
+  repository of applications working on machine-understandable XML
+  documents both for input and output.
+
+  This lack of organization, along with the huge amount of documents
+  available, has posed quite a few problems for data location. While
+  this limitation has been overcome for HTML pages using
+  heuristic-based search engines, a satisfying solution has yet to be
+  found for \ws\ location \todo{I do not like ``location'', but
+  ``localization'' is even worse; ``retrieval''?}. A promising
+  architecture seems to be the use of \emph{brokers}, whose access
+  point (usually a URI) is known by the potential clients or by
+  \emph{registries}, to which the \wss\ subscribe to publish their
+  functionalities.
+
+  Since the \emph{Declaration of Linz}, the MONET Consortium
+  \cite{MONET} has been working on the development of a framework aimed
+  at providing a set of software tools for the advertisement and
+  discovery of mathematical \wss. This framework turns out to be
+  strongly based on both \wss\ and brokers.
+
+  Several examples of \wss\ related to CASs (Computer Algebra Systems)
+  and Theorem Provers have been shown to be useful and to fit well in
+  the MONET architecture \todo{cite some of them: CSC???}. On the other
+  hand \todo{too informal?} the set of \wss\ related to Proof
+  Assistants tends to be ... rather ... empty!\todo{I can already
+  imagine your comment: BUHM! :-)}
+
+  Despite that, the proof assistant case seems to be well suited for
+  investigating the usage of many different mathematical \wss. Indeed,
+  most proof assistants are still based on non-client/server
+  architectures, are application-centric instead of document-centric,
+  and offer a scarce level of automation, leaving entirely to the user
+  the choice of which macro (usually called a \emph{tactic}) to use in
+  order to make progress in a proof.
+
+  The average proof assistant can be, for example, a client of a \ws\
+  interfacing a special or general purpose theorem prover, or a client
+  of a \ws\ interfacing a CAS to simplify expressions in a particular
+  mathematical domain.
+
+  The Omega Ants project \todo{dates, authors and papers}, developed
+  before the \emph{\ws\ era}, enriched the Omega \cite{Omega} proof
+  assistant with an architecture able to distribute subparts of a proof
+  to agents (Ants) implemented as separate processes running on the
+  same host as the proof assistant \todo{check ...}.
+
+  In this paper we present an architecture, namely \hbugs, implementing
+  a \emph{suggestion engine} for the proof assistant developed within
+  the \helm\ project. This architecture is based on a set of \wss\ and
+  a broker (a \ws\ itself) and is able to distribute a proof status
+  from a client to several \emph{tutors}, each of which can try to make
+  progress in the proof and, in case of success, notify the user by
+  sending a \emph{hint}. Both the broker and the tutors are instances
+  of the homonymous entities of the MONET framework.
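+
+  To fix the intuition, the following OCaml fragment (OCaml is the
+  implementation language of the \helm\ tools) gives a minimal, purely
+  illustrative sketch of the dispatching logic performed by the broker;
+  all names and types are hypothetical and do not refer to the actual
+  \hbugs\ code base:
+
+\begin{verbatim}
+(* Illustrative sketch only: a tutor is modelled as a function that,
+   given a proof status, either returns a hint or fails (None). *)
+type proof_status = {
+  proof_tree   : string;  (* the unfinished proof tree              *)
+  focused_goal : int;     (* the open goal the client is focused on *)
+}
+
+type hint = string
+
+(* Try every registered tutor on the submitted proof status and keep
+   the successful results only. *)
+let dispatch (tutors : (proof_status -> hint option) list)
+    (status : proof_status) : hint list =
+  List.filter_map (fun tutor -> tutor status) tutors
+\end{verbatim}
+
+  In the actual architecture the tutors are remote \wss\ and the
+  collection step is asynchronous: each hint is forwarded to the client
+  as soon as the corresponding tutor succeeds, rather than after all
+  tutors have answered.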
+ +% \section{Introduction} +% Since the development of the first proof-assistants based on the +% Curry-Howard isomorphism, it became clear that directly writing +% lambda-terms (henceforth called simply terms) was a difficult, repetitive, +% time-expensive and error prone activity; hence the introduction of +% meta-languages to describe procedures that are able to automatically +% generate the low-level terms. +% Nowadays, almost all the proof-assistants +% using terms description of proofs have many levels of abstractions, i.e. +% meta-languages, to create the terms, with the only remarkable exception +% of the ALF family \cite{ALF} in which terms are still directly written +% without any over-standing level of abstraction. +% +% In particular, there are +% usually at least two levels, that of tactics and that of the language in +% which the whole system and hence the tactics are written; once the tactics +% are implemented, to do a proof the user enters to the system a sequence of +% tactics, called a script; the script is then executed to create the term +% encoding of the proof, which is type-checked by the kernel of the +% proof-assistant. Writing a script interactively is much simpler than +% writing the term by hands; once written, though, it becomes impossible +% to understand a script without replaying it interactively. For this reason, +% we can hardly speak of the language of tactics as a high level language, +% even if it is ``compiled'' to the language of terms. +% +% To avoid confusion, in the rest of this paper we will +% avoid the use of the term ``proof'', using only ``script'' and ``term''; +% moreover, we will avoid also the terms ``proof-checking'' and ``type-checking'', +% replacing them with ``retyping'' to mean the type-checking of an already +% generated term; finally, we will use the term ``compiling'' to mean the +% execution of a script. Usually, compilation ends with +% the type-checking of the generated term; hence the choice of ``retyping'' for +% the type-checking of an already generated term. +% +% A long term consequence of the introduction of tactics has been the +% progressive lowering of interest in terms. In particular, users of +% modern proof-assistants as Coq \cite{Coq} may even ignore the existence of +% commands to show the terms generated by their scripts. These terms are +% usually very huge and quite unreadable, so they don't add any easily +% accessible information to the scripts. Hence implementors have loosed +% interest in terms too, as far as their size and compilation time are +% small enough to make the system response-time acceptable. +% When this is not the case, it is sometimes possible to trade space +% with time using reflexive tactics, in which the potentiality of complex +% type-systems to speak about themselves are exploited: reflexive tactics +% usually leads to a polynomial asympotical reduction in the size of the +% terms, at the cost of an increased reduction time during retyping and +% compilation. Often, though, reflexive tactics could not be used; moreover, +% reflexive tactics suffer of the same implementative problems of other tactics +% and so could be implemented in such a way to create huger than +% needed terms, even if asymptotically smaller than the best ones created +% without reflection. 
+% +% The low level of interest in terms of the implementors of +% tactics often leads to naive implementations (``If it works, it is OK'') and +% this to low-quality terms, which: +% \begin{enumerate} +% \item are huger than needed because particular cases are not taken +% into account during tactic development +% \item require more time than needed for retyping due to +% their size +% \item are particularly unreadable because they don't +% correspond to the ``natural'' way of writing the proof by hand +% \end{enumerate} +% To cope with the second problem, retyping is usually avoided allowing +% systems to reload saved terms without retyping them and using a +% checksum to ensure that the saved file has not been modified. This is +% perfectly reasonable accordingly to the traditional application-centric +% architecture of proof-assistants in which you have only one tool integrating +% all the functionalities and so you are free to use a proprietary format for data +% representation. +% +% In the last months, though, an ever increasing number of people and projects +% (see, for example, HELM \cite{EHELM}, MathWeb \cite{MATHWEB} and +% Formavie \cite{FORMAVIE}) +% have been interested to switch from the application-centric model to the +% newer content-centric one, in which information is stored in +% standard formats (that is, XML based) to allow different applications to work +% on the same set of data. As a consequence, term size really becomes an +% important issue, due to the redundancy of standard formats, and retyping +% is needed because the applications can not trust each other, hence needing +% retyping and making retyping time critical. +% Moreover, as showed by Yann Coscoy in its PhD. +% thesis \cite{YANNTHESIS} or by the Alfa interface to the Agda system +% \cite{ALFA}, it is perfectly reasonable and also feasible to try +% to produce descriptions in natural languages of formal proofs encoded as +% terms. +% This approach, combined with the further possibility of applying the usual +% two-dimensional mathematic notation to the formulas that appears in the +% terms, is being followed by projects HELM \cite{HELM}, PCOQ \cite{PCOQ} and +% MathWeb \cite{MATHWEB} with promising results. It must be understood, though, +% that the quality (in terms of naturality and readability) of this kind of +% proofs rendering heavily depends on the quality of terms, making also +% the third characteristic of low-quality terms a critical issue. +% +% A totally different scenario in which term size and retyping time are +% critical is the one introduced by Necula and Lee \cite{Necula} under +% the name Proof Carrying Code (PCC). PCC is a technique that can be used +% for safe execution of untrusted code. In a typical instance of PCC, a code +% receiver establishes a set of safety rules that guarantee safe behavior +% of programs, and the code producer creates a formal safety proof that +% proves, for the untrusted code, adherence to the safety rules. Then, the +% proof is transmitted to the receiver together with the code and it is +% retyped before code execution. While very compact representation of the +% terms, highly based on type-inference and unification, could be used +% to reduce the size and retyping time \cite{Necula2}, designing proof-assistants +% to produce terms characterized by an high level of quality is still necessary. +% +% In the next section we introduce a particular class of metrics for +% tactics evaluation. 
In section \ref{equivalence} we consider the +% notion of tactics equivalence and we describe one of the bad habits +% of tactics implementors, which is overkilling; we also provide and analyze +% a simple example of overkilling tactic. In the last section we describe +% a concrete experience of fixing overkilling in the implementation of a +% reflexive tactic in system Coq and we analyze the gain with respect to +% term-size, retyping time and term readability. +% +% \section{From Metrics for Terms Evaluation to Metrics for Tactics Evaluation} +% The aim of this section is to show how metrics for term evaluation could +% induce metrics for tactic evaluation. Roughly speaking, this allows us to +% valuate tactics in terms of the quality of the terms produced. Even if +% we think that these kinds of metrics are interesting and worth studying, +% it must be understood that many other valuable forms of metrics could be +% defined on tactics, depending on what we are interested in. For example, +% we could be interested on compilation time, that is the sum of the time +% required to generate the term and the retyping time for it. Clearly, only +% the second component could be measured with a term metric and a good +% implementation of a tactic with respect to the metric considered could +% effectively decide to sacrifice term quality (and hence retyping time) to +% minimize the time spent to generate the term. The situation is very close to +% the one already encountered in compilers implementation, where there is +% always a compromise, usually user configurable, between minimizing compiling +% time and maximizing code quality. +% +% The section is organized as follows: first we recall the definition of +% tactic and we introduce metrics on terms; then we give the definition +% of some metrics induced by term metrics on tactics. +% +% \begin{definition}[Of tactic] +% We define a \emph{tactic} as a function that, given a goal $G$ (that is, a local +% context plus a type to inhabit) that satisfies a list of assumptions +% (preconditions) $P$, +% returns a couple $(L,t)$ where +% $L = {L_1,\ldots,L_n}$ is a (possible empty) list of proof-obligations +% (i.e. goals) and $t$ is a function that, given a list $l = {l_1,\ldots,l_n}$ +% of terms such that $l_i$ inhabits\footnote{We say, with a small abuse of +% language, that a term $t$ inhabits a goal $G=(\Gamma,T)$ when $t$ is of type +% $T$ in the context $\Gamma$.} $L_i$ for each $i$ in ${1,\ldots,n}$, returns a +% term $t(l)$ inhabiting $G$. +% \end{definition} +% +% % You can get another looser definition of tactic just saying that a tactic +% % is a partial function instead than a total one. In this paper when referring +% % to tactics we always refer to the first definition. +% +% \begin{definition}[Of term metric] +% For any goal $G$, a term metric $\mu_G$ is any function in +% $\mathbb{N}^{\{t/t\;\mbox{inhabits}\;G\}}$. +% Two important class of term metrics are functional metrics and monotone +% metrics: +% \begin{enumerate} +% \item {\bf Functional metrics:} a metric $\mu_G$ is \emph{functional} +% if for each term context (=term with an hole) $C[]$ and for all terms +% $t_1$,$t_2$ if $\mu_G(t_1)=\mu_G(t_2)$ then $\mu_G(C[t_1])=\mu_G(C[t_2])$. +% An equivalent definition is that a metric $\mu_G$ is functional +% if for each term context $C[]$ the function +% $\lambda t.\mu_G(C[\mu_G^{-1}(t)])$ is well defined. 
+% \item {\bf Monotone metrics:} a metric $\mu_G$ is \emph{monotone} if for +% each term context $C[]$ and for all terms $t_1$,$t_2$ if +% $\mu_G(t_1)\le\mu_G(t_2)$ then $\mu_G(C[t_1])\le\mu_G(C[t_2])$. +% An equivalent definition is that a metric $\mu_G$ is functional if +% for each term context $C[]$ the function +% $\lambda t.\mu_G(C[\mu_G^{-1}(t)])$ is well defined and monotone. +% \end{enumerate} +% +% % Vecchie definizioni +% %\begin{enumerate} +% %\item {\bf Monotony:} a metric $\mu_G$ is \emph{monotone} if for each term +% %context (=term with an hole) $C[]$ and for all terms $t_1$,$t_2$ if +% %$\mu_G(t_1)\le\mu_G(t_2)$ then $\mu_G(C[t_1])\le\mu_G(C[t_2])$ +% %\item {\bf Strict monotony:} a monotone metric $\mu_G$ is \emph{strictly +% %monotone} if for each term context $C[]$ and +% %for all terms $t_1$,$t_2$ if $\mu_G(t_1)=\mu_G(t_2)$ then +% %$\mu_G(C[t_1])=\mu_G(C[t_2])$ +% %\end{enumerate} +% \end{definition} +% +% Typical examples of term metrics are the size of a term, the time required +% to retype it or even an estimate of its ``naturality'' (or simplicity) to be +% defined somehow; the first two are also examples of monotone metrics +% and the third one could probably be defined as to be. So, in the rest +% of this paper, we will restrict to monotone metrics, even if the +% following definitions also work with weaker properties for general metrics. +% Here, however, we are not interested in defining such metrics, but in showing +% how they naturally induce metrics for tactics. +% +% Once a term metric is chosen, we get the notion of a best term (not unique!) +% inhabiting a goal: +% \begin{definition}[Of best terms inhabiting a goal] +% the term $t$ inhabiting a goal $G$ is said to be a best term inhabiting $G$ +% w.r.t. the metric $\mu_G$ when +% $\mu_G(t) = min\{\mu_G(t') / t'\;\mbox{inhabits}\;G\}$. +% \end{definition} +% +% Using the previous notion, we can confront the behavior of two tactics +% on the same goal: +% +% \begin{definition} +% Let $\tau_1$ and $\tau_2$ be two tactics both applyable to a goal $G$ +% such that $\tau_1(G) = (L_1,t_1)$ and $\tau_2(G) = (L_2,t_2)$. +% We say that $\tau_1$ is better or equal than $\tau_2$ for the goal $G$ +% with respect to $\mu_G$ if, for all $l_1$ and $l_2$ lists of best terms +% inhabiting respectively $L_1$ and $L_2$, $\mu_G(t_1(l_1)) \le \mu_G(t_2(l_2))$ +% holds. +% \end{definition} +% Note that confronting in this way ``equivalent'' tactics +% (whose definition is precised in the next section) gives us information +% on which implementation is better; doing the same thing on tactics that +% are not equivalent, instead, gives us information about what tactic to +% apply to obtain the best proof. +% +% A (functional) metric to confront two tactics only on a particular goal +% has the nice property to be a total order, but is quite useless. Hence, +% we will now +% define a bunch of different tactic metrics induced by term metrics +% that can be used to confront the behavior of tactics when applied to +% a generic goal. Some of them will be \emph{deterministic} partial orders; +% others will be total orders, but will provide only a \emph{probabilistic} +% estimate of the behavior. Both kinds of metrics are useful in practice +% when rating tactics implementation. +% \begin{definition}[Of locally deterministic better or equal tactic] +% $\tau_1$ is a locally deterministic (or locally +% uniform) better or equal tactic +% than $\tau_2$ w.r.t. 
$\mu$ (and in that case we write $\tau_1 \le_{\mu} \tau_2$ +% or simply $\tau_1 \le \tau_2$), when +% for all goals $G$ satisfying the preconditions of both tactics we have that +% $\tau_1$ is better or equal than $\tau_2$ w.r.t. the metric $\mu_G$. +% \end{definition} +% +% \begin{definition}[Of locally deterministic better tactic] +% $\tau_1$ is a locally deterministic (or locally uniform) +% better tactic than $\tau_2$ +% w.r.t. $\mu$ (and in that case we write $\tau_1 <_{\mu} \tau_2$ +% or simply $\tau_1 < \tau_2$), when $\tau_1 \le_{\mu} \tau_2$ and +% exists a goal $G$ satisfying the preconditions of both tactics such that +% $\tau_1$ is better (but not equal!) than $\tau_2$ w.r.t. the metric $\mu_G$. +% \end{definition} +% +% \begin{definition}[Of locally probabilistic better or equal tactic of a factor K] +% $\tau_1$ is said to be a tactic locally probabilistic better or equal of a +% factor $0.5 \le K \le 1$ than $\tau_2$ w.r.t. $\mu$ and a particular expected +% goals distribution when the probability of having $\tau_1$ better or equal than +% $\tau_2$ w.r.t. the metric $\mu_G$ is greater or equal to $K$ when $G$ is +% chosen randomly according to the distribution. +% \end{definition} +% The set of terms being discrete, you can note that a deterministically better +% or equal tactic is a tactic probabilistically better or equal of a factor 1. +% +% +% To end this section, we can remark the strong dependence of the $\le$ +% relation on the choice of metric $\mu$, so that it is easy to find +% two metrics $\mu_1,\mu_2$ such that $\tau_1 <_{\mu_1} \tau_2$ and +% $\tau_2 <_{\mu_2} \tau_1$. Luckily, tough, the main interesting metrics, +% term size, retyping time and naturality, are in practice highly correlated, +% though the correlation of the third one with the previous two could be a +% bit surprising. So, in the following section, we +% will not state what is the chosen term metric; you may think as +% any of them or even at some kind of weighted mean. +% +% +% \section{Equivalent Tactics and Overkilling} +% \label{equivalence} +% We are now interested in using the metrics defined in the previous section +% to confront tactics implementation. Before doing so, though, we have to +% identify what we consider to be different implementations of the +% same tactic. Our approach consists in identifying every implementation +% with the tactic it implements and then defining appropriate notions of +% equivalence for tactics: two equivalent tactics will then be considered +% as equivalent implementations and will be confronted using metrics. +% +% Defining two tactics as equivalent when they can solve exactly the same +% set of goals generating the same set of proof-obligations seems quite natural, +% but is highly unsatisfactory if not completely wrong. The reason is that, for +% equivalent tactics, we would like to have the \emph{property of substitutivity}, +% that is substituting a tactic for an equivalent one in a script should give +% back an error-free script\footnote{A weaker notion of substitutivity is that +% substituting the term generated by a tactic for the term generated by an +% equivalent one in a generic well-typed term should always give back a +% well-typed term.}. 
In logical frameworks with dependent types, +% without proof-irrelevance and with universes as CIC \cite{Werner} though, +% it is possible for a term to inspect the term of a previous proof, behaving +% in a different way, for example, if the constructive proof of a conjunction +% is made proving the left or right side. +% So, two tactics, equivalent w.r.t. the previous +% definition, that prove $A \vee A$ having at their disposal an +% hypothesis $A$ proving the first one the left and the second one +% the right part of the conjunction, could not be substituted one for the +% other if a subsequent term inspects the form of the generated proof. +% +% Put in another way, it seems quite reasonable to derive equivalence for +% tactics from the definition of an underlying equivalence for terms. +% The simplest form of such an equivalence relation is convertibility +% (up to proof-irrelevance) of closed terms and this is the relation +% we will use in this section and the following one. In particular, +% we will restrict ourselves to CIC and hence to +% $\beta\delta\iota$-convertibility\footnote{The Coq proof-assistant +% introduces the notion of \emph{opaque} and \emph{transparent} terms, +% differing only for the possibility of being inspected. Because the +% user could change the opacity status at any time, the notion of +% convertibility we must conservatively choose for the terms of Coq is +% $\beta\delta\iota$-convertibility after having set all the definitions +% as transparent.}. +% Convertibility, though, is a too restrictive notion that does not take +% in account, for example, commuting conversions. Looking for more suitable +% notions of equivalence is our main open issue for future work. +% +% \begin{definition}[Of terms closed in a local environment] +% A term $t$ is \emph{closed} in a local environment $\Gamma$ when $\Gamma$ +% is defined on any free variable of $t$. +% \end{definition} +% +% \begin{definition}[Of equivalent tactics] +% We define two tactics $\tau_1$ and $\tau_2$ to be equivalent (and +% we write $\tau_1 \approx \tau_2$) when for each goal $G = (\Gamma,T)$ and for +% each list of terms closed in $\Gamma$ and inhabiting the proof-obligations +% generated respectively by $\tau_1$ and $\tau_2$, we have that the result terms +% produced by $\tau_1$ and $\tau_2$ are $\beta\delta\iota$-convertible. +% \end{definition} +% +% Once we have the definition of equivalent tactics, we can use metrics, +% either deterministic or probabilistic, to confront them. In particular, +% in the rest of this section and in the following one we will focus on the +% notion of deterministically overkilling tactic, defined as follows: +% +% \begin{definition}[Of overkilling tactics] +% A tactic $\tau_1$ is (deterministically) overkilling w.r.t. a metric +% $\mu$ when there exists another tactic $\tau_2$ such that +% $\tau_1 \approx \tau_2$ and $\tau_2 <_\mu \tau_1$. +% \end{definition} +% +% Fixing an overkilling tactic $\tau_1$ means replacing it with the +% tactic $\tau_2$ which is the witness of $\tau_1$ being overkilling. +% Note that the fixed tactic could still be overkilling. +% +% The name overkilling has been chosen because most of the time overkilling +% tactics are tactics that do not consider special cases, following the general +% algorithm. While in computer science it is often a good design pattern to +% prefer general solutions to ad-hoc ones, this is not a silver bullet: +% an example comes another time from compiler technology, where +% ad-hoc cases, i.e. 
optimizations, are greatly valuable if not necessary. +% In our context, ad-hoc cases could be considered either as optimizations, +% or as applications of Occam's razor to proofs to keep the simplest one. +% +% \subsection{A Simple Example of Overkilling Tactic} +% A first example of overkilling tactic in system Coq is +% \emph{Replace}, that works in this way: when the current goal +% is $G = (\Gamma,T)$, the +% tactic ``{\texttt Replace E$_1$ with E$_2$.}'' always produces a new +% principal proof-obligation $(\Gamma, T\{E_2/E_1\})$ and an auxiliary +% proof-obligation $(\Gamma, E_1=E_2)$ and uses the elimination scheme +% of equality on the term $E_1 = E_2$ and the two terms that inhabit the +% obligations to prove the current goal. +% +% To show that this tactic is overkilling, we will provide an example +% in which the tactic fails to find the best term, we will +% propose a different implementation that produces the best term and we will +% show the equivalence with the actual one. +% +% The example consists in applying the tactic in the case in which $E_1$ is +% convertible to $E_2$: the tactic proposes to the user the two +% proof-obligations and then builds the term as described above. We claim +% that the term inhabiting the principal proof-obligation also inhabits +% the goal and, used as the generated term, is surely smaller +% and quicker to retype than the one that is generated in the +% implementation; moreover, it +% is also as natural as the previous one, in the sense that the apparently +% lost information has simply become implicit in the reduction and could +% be easily rediscovered using type-inference algorithms as the one described +% in Coscoy's thesis \cite{YANNTHESIS}. So, a new implementation could +% simply recognize this special case and generate the better term. +% +% We will now show that the terms provided by the two implementations are +% $\beta\delta\iota$-convertible. +% Each closed terms in $\beta\delta\iota$-normal form +% inhabiting the proof that $E_1$ is equal to $E_2$ is equal to the only +% constructor of equality applied to a term convertible to the type of +% $E_1$ and to another term convertible to $E_1$; hence, once the principle +% of elimination of equality is applied to this term, we can first apply +% $\beta$-reduction and then $\iota$-reduction to obtain the term inhabiting +% the principal proof-obligation in which $E_1$ has been replaced by $E_2$. +% Since $E_1$ and $E_2$ are $\beta\delta\iota$-convertible by hypothesis and +% for the congruence properties of convertibility in CIC, we have that the +% generated term is $\beta\delta\iota$-convertible to the one inhabiting the +% principal proof-obligation. +% +% %The simplest example is when $E_1$ and $E_2$ are syntactically equal: +% %in this case the principal obligation proposed is identical to the current goal +% %and the equality elimination introduced could be replaced by the proof of +% %this obligation, leading to a smaller, quicker to retype and also more +% %natural\footnote{Even if naturality is in some way a subjective metric, +% %who would dare say that a proof in which you rewrite an expression with +% %itself is natural?} term. To show that handling this special case in this +% %way is equivalent to the current solution, we have to show that the two +% %terms are $\beta\delta\iota$-convertible when so are the subterms inhabiting +% %the obligations. 
+% %The only closed term in $\beta\delta\iota$-normal form +% %inhabiting the proof that $E_1$ is equal to $E_1$ is the only constructor +% %of equality applied to the type of $E_1$ and to $E_1$; hence we can +% %first apply $\beta$-reduction and then $\iota$-reduction to obtain +% %the second proof in which $E_1$ has been replaced by $E_1$, i.e. the +% %second proof. +% +% %So, for this example, the simple fix to avoid overkilling is checking +% %if $E_1$ and $E_2$ are syntactically equal and in case omitting the elimination; +% %curiously, this is done in Coq in the implementation of a very similar tactic +% %called \emph{Rewriting}. +% +% +% %A second, more involved, example is when $E_1$ and $E_2$ are not +% %syntactically equal, but $E_1$ reduces to $E_2$. Even in this case +% %the elimination could simply be avoided because the typing rules of +% %the logical system ensures us that a term that inhabits the principal +% %obligation also inhabits the current goal: the proof is equivalent to the +% %previous one, but for the last step in which $E_2$ is actually substituted for +% %$E_1$; by hypothesis, though, the two terms are $\beta\delta\iota$-convertible +% %and hence the thesis. +% +% %Surely smaller and faster to retype, the new term is also as natural +% %as the previous one, in the sense that the apparently lost information has +% %simply become implicit in the reduction and could be easily rediscovered +% %using type-inference algorithms as the one described in chapter ??? +% %of Coscoy's thesis \ref{YANNTHESIS}. +% +% This example may seem quite stupid because, if the user is already able to +% prove the principal proof-obligation and because this new goal is totally +% equivalent to the original one, the user could simply redo the same steps +% without applying the rewriting at all. Most of the time, though, the +% convertibility of the two terms could be really complex to understand, +% greatly depending on the exact definitions given; indeed, the user could +% often be completely unaware of the convertibility of the two terms. Moreover, +% even in the cases in which the user understands the convertibility, the +% tactic has the important effect of changing the form of the current +% goal in order to simplify the task of completing the proof, which is the reason +% for the user to apply it. +% +% ~\\ +% +% The previous example shows only a very small improvement in the produced +% term and could make you wonder if the effort of fixing overkilling and +% more in general if putting more attention to terms when implementing +% tactics is really worth the trouble. In the next section we describe as +% another example a concrete experience of fixing a complex reflexive tactic +% in system Coq that has lead to really significant improvements in term +% size, retyping time and naturality. +% +% \section{Fixing Overkilling: a Concrete Experience} +% Coq provides a reflexive tactic called \emph{Ring} to do associative-commutative +% rewriting in ring and semi-ring structures. The usual usage is, +% given the goal $E_1 = E_2$ where $E_1$ and $E_2$ are two expressions defined +% on the ring-structure, to prove the goal reducing it to proving +% $E'_1 = E'_2$ where $E'_i$ is the normal form of $E_i$. In fact, once +% obtained the goal $E'_1 = E'_2$, the tactic also tries to apply simple +% heuristics to automatically solve the goal. +% +% The actual implementation of the tactic by reflexion is quite complex and +% is described in \cite{Ring}. The main idea is described in Fig. 
\ref{ring1}: +% first of all, an inductive data type to describe abstract polynomial is +% made available. On this abstract polynomial, using well-founded recursion in +% Coq, a normalization function named $apolynomial\_normalize$ is defined; +% for technical reasons, the abstract data type of normalized polynomials +% is different from the one of un-normalized polynomials. Then, two +% interpretation functions, named $interp\_ap$ and $interp\_sacs$ are given +% to map the two forms of abstract polynomials to the concrete one. Finally, +% a theorem named $apolynomial\_normalize\_ok$ stating the equality of +% the interpretation of an abstract polynomial and the interpretation of +% its normal form is defined in Coq using well-founded induction. The above +% machinery could be used in this way: to prove that $E^I$ is equal to its +% normal form $E^{IV}$, the tactic computes an abstract polynomial $E^{II}$ that, +% once interpreted, reduces to $E^{I}$, and such that the interpretation +% of $E^{III} = (apolynomial\_normalize E^{II})$ could be shown to be equal +% to $E^{IV}$ applying $apolynomial\_normalize\_ok$. +% +% %such that $(interp\_ap\;E^{II})$ could be proved (theorem +% %$apolynomial\_normalize\_ok$) in Coq to be equal to +% %$(interp\_sacs\;E^{III})$ where +% %$E^{III} = (apolynomial\_normalize E^{II})$ is another abstract polynomial +% %in normal form and the three functions $interp\_ap$, $interp\_sacs$ and +% %$apolynomial\_normalize$ could all be defined inside Coq using well-founded +% %recursion/induction on the abstract polynomial definition. +% +% % \myincludegraphics{ring1}{t}{12cm}{Reflexion in Ring}{Reflexion in Ring} +% % \myincludegraphics{ring2}{t}{10cm}{Ring implementation (first half)}{Ring implementation (first half)} +% +% In Fig. \ref{ring2} the first half of the steps taken +% by the Ring tactic to prove $E_1 = E_2$ are shown\footnote{Here $E_1$ +% stands for $E^I$.}. +% The first step is replacing $E_1 = E_2$ with $(interp\_ap\;E^{II}_1) = E_2$, +% justifying the rewriting using the only one constructor of equality due to +% the $\beta\delta\iota$-convertibility of $(interp\_ap\;E^{II}_1)$ with $E_1$. +% The second one is replacing $(interp\_ap\;E^{II})$ with +% $(interp\_sacs\;E^{III})$, justifying the rewriting using +% $apolynomial\_normalize\_ok$. +% +% Next, the two steps are done again on the left part of the equality, +% obtaining $(interp\_sacs\;E^{III}_1) = (interp\_sacs\;E^{III}_2)$, +% that is eventually solved trying simpler tactics as \emph{Reflexivity} or left +% to the user. +% +% The tactic is clearly overkilling, at least due to the usage of rewriting for +% convertible terms. Let's consider as a simple example the session in Fig. +% \ref{session}: +% \begin{figure}[t] +% %\begin{verbatim} +% \mbox{\hspace{3cm}\tt Coq $<$ Goal ``0*0==0``.}\\ +% \mbox{\hspace{3cm}\tt 1 subgoal}\\ +% ~\\ +% \mbox{\hspace{3cm}\tt ~~=================}\\ +% \mbox{\hspace{3cm}\tt ~~~~``0*0 == 0``}\\ +% ~\\ +% \mbox{\hspace{3cm}\tt Unnamed\_thm $<$ Ring.}\\ +% \mbox{\hspace{3cm}\tt Subtree proved!} +% %\end{verbatim} +% \caption{A Coq session.} +% \label{session} +% \end{figure} +% in Fig. \ref{before} the $\lambda$-term created by the +% original overkilling implementation of Ring is shown. Following the previous +% explanation, it should be easily understandable. In particular, the +% four rewritings are clearly visible as applications of $eqT\_ind$, as +% are the two applications of $apolynomial\_normalize\_ok$ and the +% three usage of reflexivity, i.e. 
the two applications of $refl\_eqT$ +% to justify the rewritings on the left and right members of the equality +% and the one that ends the proof. +% +% Let's start the analysis of overkilling in this implementation: +% \begin{description} +% \item[Always overkilling rewritings:] as already stated, four of the rewriting +% steps are always overkilling because the rewritten term is convertible to +% the original one due to the tactic implementation. +% As proved in the previous section, all these rewritings could be simply +% removed obtaining an equivalent tactic. +% \item[Overkilling rewritings due to members already normalized:] it may happen, +% as in our example, that one (or even both) of the two members is already in +% normal form. In this case the two rewriting steps for that member could be +% simply removed obtaining an equivalent tactic as shown in the previous section. +% \item[Rewriting followed by reflexivity:] after having removed all +% the overkilling rewritings, the general form of the $\lambda$-term produced +% for $E_1 = E_2$ is the application of two rewritings ($E'_1$ for $E_1$ and +% $E'_2$ for $E_2$), followed by a proof of $E'_1 = E'_2$. In many cases, +% $E'_1$ and $E'_2$ are simply convertible and so the tactic finishes the +% proof with an application of reflexivity to prove the equivalent goal +% $E'_1 = E'_1$. +% A smaller and also more natural solution is just to +% rewrite $E'_1$ for $E_1$ and then proving $E'_1 = E_2$ applying the lemma +% stating the symmetry of equality to the proof of $E_2 = E'_2$. +% The equivalence to the original +% tactic is trivial by $\beta\iota$-reduction because the lemma is proved +% exactly doing the rewriting and then applying reflexivity: +% $$ +% \begin{array}{l} +% \lambda A:Type.\\ +% \hspace{0.2cm}\lambda x,y:A.\\ +% \hspace{0.4cm}\lambda H:(x==y).\\ +% \hspace{0.6cm}(eqT\_ind~A~x~ [x:A]a==x~(refl\_eqT~A~x)~y~H) +% \end{array} +% $$ +% \end{description} +% In Fig. \ref{after} is shown the $\lambda$-term created by the same +% tactic after having fixed all the overkilling problems described above. +% +% \begin{figure} +% \begin{verbatim} +% Unnamed_thm < Show Proof. +% LOC: +% Subgoals +% Proof: +% (eqT_ind R +% (interp_ap R Rplus Rmult ``1`` ``0`` Ropp (Empty_vm R) +% (APmult AP0 AP0)) [r:R]``r == 0`` +% (eqT_ind R +% (interp_sacs R Rplus Rmult ``1`` ``0`` Ropp (Empty_vm R) +% Nil_varlist) [r:R]``r == 0`` +% (eqT_ind R +% (interp_ap R Rplus Rmult ``1`` ``0`` Ropp (Empty_vm R) AP0) +% [r:R] +% ``(interp_sacs R Rplus Rmult 1 r Ropp (Empty_vm R) Nil_varlist) +% == r`` +% (eqT_ind R +% (interp_sacs R Rplus Rmult ``1`` ``0`` Ropp (Empty_vm R) +% Nil_varlist) +% [r:R] +% ``(interp_sacs R Rplus Rmult 1 r Ropp (Empty_vm R) +% Nil_varlist) == r`` (refl_eqT R ``0``) +% (interp_ap R Rplus Rmult ``1`` ``0`` Ropp (Empty_vm R) AP0) +% (apolynomial_normalize_ok R Rplus Rmult ``1`` ``0`` Ropp +% [_,_:R]false (Empty_vm R) RTheory AP0)) ``0`` +% (refl_eqT R ``0``)) +% (interp_ap R Rplus Rmult ``1`` ``0`` Ropp (Empty_vm R) +% (APmult AP0 AP0)) +% (apolynomial_normalize_ok R Rplus Rmult ``1`` ``0`` Ropp +% [_,_:R]false (Empty_vm R) RTheory (APmult AP0 AP0))) ``0*0`` +% (refl_eqT R ``0*0``)) +% \end{verbatim} +% \caption{The $\lambda$-term created by the original overkilling implementation} +% \label{before} +% \end{figure} +% +% \begin{figure} +% \begin{verbatim} +% Unnamed_thm < Show Proof. 
+% LOC: +% Subgoals +% Proof: +% (sym_eqT R ``0`` ``0*0`` +% (apolynomial_normalize_ok R Rplus Rmult ``1`` ``0`` Ropp +% [_,_:R]false (Empty_vm R) RTheory (APmult AP0 AP0))) +% \end{verbatim} +% \caption{The $\lambda$-term created by the new implementation} +% \label{after} +% \end{figure} +% +% \clearpage +% \subsection{A Quantitative Analysis of the Gain Obtained} +% Let's now try a quantitative analysis of the gain with respect to term size, +% retyping time and naturality, considering the two interesting cases of +% no member or only one member already in normal form\footnote{If the two +% members are already in normal form, the new implementation simply applies +% once the only constructor of the equality to one of the two members. The tactic +% is also implemented to do the same thing also when the two members are not yet +% in normal forms, but are already convertible. We omit this other improvement +% in our analysis.}. +% +% \subsubsection{Term Size.} +% \paragraph{Terms metric definition:} given a term $t$, the metric $|.|$ associates to it its number of nodes $|t|$. +% \paragraph{Notation}: $|T|$ stands for the number of nodes in the actual parameters +% given to $interp\_ap$, $interp\_sacs$ and $apolynomial\_normalize\_ok$ to +% describe the concrete (semi)ring theory and the list of non-primitive +% terms occurring in the goal to solve. In the example in figures \ref{before} +% and \ref{after}, $|T|$ is the number of nodes in +% \begin{texttt}[R Rplus Rmult ``1`` ``0`` Ropp (Empty\_vm R)]\end{texttt}. +% $|R|$ stands for the number of nodes in the term which is the carrier +% of the ring structure. In the same examples, $|R|$ is simply 1, i.e. +% the number of nodes in \begin{texttt}R\end{texttt}. +% +% \begin{displaymath} +% \begin{array}{l} +% \mbox{\bf Original version:}\\ +% \begin{array}{ll} +% 1 + (|E^{II}_1| + |T| + 1) + |E_2| + |E_1| + & +% \mbox{(I rewriting Left)} \\ +% 1 + |E_1| + & +% \mbox{(justification)} \\ +% 1 + (|E^{III}_1| + |T| + 1) + |E_2| + (|E^{II}_1| + |T| + 1) + & +% \mbox{(II rewriting Left)} \\ +% (|E^{II}_1| + |T| + 1) + & +% \mbox{(justification)} \\ +% 1 + (|E^{II}_2| + |T| + 1) + (|E^{III}_1| + |T| + 1) + |E_2| + & +% \mbox{(I rewriting Right)} \\ +% 1 + |E_2| + & +% \mbox{(justification)} \\ +% 1 + (|E^{III}_2| + |T| + 1) + (|E^{III}_1| + |T| + 1) + & +% \mbox{(II rewriting Right)} \\ +% ~(|E^{II}_2| + |T| + 1) +& \\ +% (|E^{II}_2| + |T| + 1) + & +% \mbox{(justification)} \\ +% 1 + |E_1| = & +% \mbox{(reflexivity application)} \\ +% \hline +% 4|E_1| + 2|E_2| + 3|E^{II}_1| + 3|E^{II}_2| + 3|E^{III}_1| + |E^{III}_2| +~ & +% \mbox{\bf Total number} \\ +% ~10|T| + 17 & +% \end{array} +% \end{array} +% \end{displaymath} +% +% \begin{displaymath} +% \begin{array}{l} +% \mbox{\bf New version, both members not in normal form:}\\ +% \begin{array}{ll} +% 1 + |R| + |E_1| + |E'_2| + & +% \mbox{(Rewriting Right)} \\ +% 1 + |T| + |E^{II}_2| + & +% \mbox{(justification)} \\ +% 1 + |R| + |E'_2| + |E'_1| + |E_2| + & +% \mbox{(Symmetry application)} \\ +% 1 + |T| + |E^{II}_1| = & +% \mbox{(justification)} \\ +% \hline +% 2|E_1| + |E_2| + |E^{II}_1| + |E^{II}_2| + 2|E'_2| + 2|T| +~ & +% \mbox{\bf Total number} \\ +% ~2|R| + 4 & \\ +% ~ & +% \end{array} +% \end{array} +% \end{displaymath} +% \begin{displaymath} +% \begin{array}{l} +% \mbox{\bf New version, only the first member not in normal form:}\\ +% \begin{array}{ll} +% 1 + |R| + |E_1| + |E'_2| + & +% \mbox{(Rewriting)} \\ +% 1 + |T| + |E^{II}_2| = & +% \mbox{(justification)} \\ +% \hline +% |E_1| + |E'_2| + 
|E^{II}_2| + |T| + |R| + 2~~~ & +% \mbox{\bf Total number} \\ +% ~ & +% \end{array} +% \end{array} +% \end{displaymath} +% +% While the overall space complexity of the terms generated by the new +% implementation is asymptotically equal to the one of the old implementation, +% all the constants involved are much smaller, but for the one of +% $E'_2$ (the two normal forms) that before was 0 and now is +% equal to 2. Is it possible to have goals for which the new implementation +% behaves worst than the old one? Unfortunately, yes. This happens when +% the size of the two normal forms $E'_1$ and $E'_2$ is greatly huger than +% ($E^{II}_1 + |T| + 1)$ and $(E^{II}_2 + |T| + 1)$. This happens when +% the number of occurrences of non-primitive terms is much higher than +% the number of non-primitive terms and the size of them is big. More +% formally, being $m$ the number of non-primitive terms, $d$ the average +% size and $n$ the number of occurrences, the new implementation creates bigger +% terms than the previous one if +% \begin{displaymath} +% n \log_2 m + m d < n d +% \end{displaymath} +% where the difference between the two members is great enough to hide the gain +% achieved lowering all the other constants. +% The logarithmic factor in the previous +% formula derives from the implementation of the map of variables to +% non-primitive terms as a tree and the representation of occurrences with +% the path inside the tree to retrieve the term. +% +% To fix the problem, for each non-primitive term occurring more than once +% inside the normal forms, we can use a \emph{let \ldots in} local definition +% to bind it to a fresh identifier; then we replace every occurrence of +% the term inside the normal forms with the appropriate +% identifier\footnote{This has not yet been implemented in Coq.}. +% In this way, the above inequation becomes +% \begin{displaymath} +% n \log_2 m + m d < n + m d +% \end{displaymath} +% that is never satisfied. +% +% Here it is important to stress how the latest problem was easily +% overlooked during the implementation and has been discovered +% only during the previous analysis, strengthening our belief in +% the importance of this kind of analysis for tactic implementations. +% +% In the next two paragraphs we will consider only the new implementation +% with the above fixing. +% +% \subsubsection{Retyping Time.} +% \paragraph{Terms metric definition:} given a term $t$, the metric $|.|$ associates to it the time $|t|$ required to retype it. +% +% Due to lack of space, we will omit a detailed analysis as the one given +% for terms size. Nevertheless, we can observe that the retyping time required +% is surely smaller because all the type-checking operations required for +% the new implementation are already present in the old one, but for the +% type-checking of the two normal forms, that have fewer complexity +% than the type-checking of the two abstract normal forms, and the +% \emph{let \ldots in} definitions that have the same complexity of the +% type-checking of the variable map. Moreover, the quite expensive operation +% of computing the two normal forms is already done during proof construction. 
+% +% %In the case in which both the members of the equality are not in normal form, +% %we can't expect a great improvement in retyping time but for very cheap +% %normalizations; this because retyping time is dominated by the factor due to +% %$apolynomial\_normalize$ that is present in both implementations and is +% %usually much higher than the factor due to the removed applications of +% %$interp\_ap$, $interp\_sacs$ and the eliminations of equality. +% +% %The situation is very different if one of the two members is already +% %convertible to its normal form and the reduction involved in the normalization +% %is expensive. In this rather unlikely case, the new implementation +% %avoids the reduction at all, roughly halving the overall retyping time. +% +% In section \ref{benchmarks} we present some benchmarks to give an idea of +% the real gain obtained. +% +% \subsubsection{Naturality.}~\\~\\ +% The idea behind the \emph{Ring} tactic is to be able to prove +% an equality showing that both members have the same normal form. +% This simply amounts to show that each member is equal to the same +% normal form, that is exactly what is done in the new implementation. +% Indeed, every step that belonged to the old implementation and has been +% changed or removed to fix overkilling used to lead to some unnatural step: +% \begin{enumerate} +% \item The fact that the normalization is not done on the concrete +% representation, but passing through two abstract ones that are +% interpreted on the concrete terms is an implementative detail that +% was not hidden as much as possible as it should be. +% \item Normalizing a member of the equality that is already in normal form, +% is illogical and so unnatural. Hence it should be avoided, but it was not. +% \item The natural way to show $A=B$ under the hypothesis $B=A$ is just +% to use the symmetric property of equality. Instead, the old implementation +% rewrote $B$ with $A$ using the hypothesis and proved the goal by reflexivity. +% \item Using local definitions (\emph{let \ldots in}) as abbreviations +% rises the readability of the proof by shrinking its size removing +% subexpressions that are not involved in the computation. +% \end{enumerate} +% +% \subsection{Some Benchmarks} +% \label{benchmarks}To understand the actual gain in term size and retyping +% time on real-life examples, we have done some benchmarks on the whole set +% of theorems in the standard library of Coq that use the Ring tactic. The +% results are shown in table~\ref{benchs}. +% +% Term size is the size of the disk dump of the terms. Re-typing time is the +% user time spent by Coq in proof-checking already parsed terms. The reduction +% of the terms size implies also a reduction in Coq parsing time, that is +% difficult to compute because Coq files do not hold single terms, but whole +% theories. Hence, the parsing time shown is really the user time spent by Coq +% to parse not only the terms on which we are interested, but also all the +% terms in their theories and the theories on which they depend. So, this last +% measure greatly under-estimates the actual gain. +% +% Every benchmark has been repeated 100 times under different load conditions on +% a 600Mhz Pentium III bi-processor equipped with 256Mb RAM. The timings shown +% are mean values. 
+% +% \begin{table} +% \begin{center} +% \begin{tabular}{|l|c|c|c|} +% \hline +% & ~Term size~ & Re-typing time & Parsing time\\ +% \hline +% Old implementation & 20.27Mb & 4.59s & 2.425s \\ +% \hline +% New implementation & 12.99Mb & 2.94s & 2.210s \\ +% \hline +% Percentage reduction & 35.74\% & 35.95\% & 8.87\% \\ +% \hline +% \end{tabular} +% \end{center} +% \caption{Some benchmarks} +% \label{benchs} +% \end{table} +% +% \section{Conclusions and Future Work} +% Naive ways of implementing tactics lead to low quality terms that are +% difficult to inspect and process. To improve the situation, we show +% how metrics defined for terms naturally induce metrics for tactics and +% tactics implementation and we advocate the usage of such metrics for +% tactics evaluation and implementation. In particular, metrics could +% be used to analyze the quality of an implementation or could be used at run +% time by a tactic to choose what is the best way to proceed. +% +% To safely replace a tactic implementation with another one, it is important +% to define when two tactics are equivalent, i.e. generate equivalent terms. +% In this work, the equivalence relation chosen for terms has simply been +% $\beta\delta\iota$-convertibility, that in many situations seems too strong. +% Hence, an important future work is the study of weaker forms of term +% equivalence and the equivalence relations they induce on tactics. In particular, +% it seems that proof-irrelevance, $\eta$-conversion and commuting conversions +% must all be considered in the definition of a suitable equivalence relation. + +\begin{thebibliography}{01} + +% \bibitem{ALF} The ALF family of proof-assistants:\\ +% {\tt http://www.cs.chalmers.se/ComputingScience/Research/\\Logic/implementation.mhtml} +% +% \bibitem{Coq} The Coq proof-assistant:\\ +% {\tt http://coq.inria.fr/} +% +% \bibitem{FORMAVIE} The Formavie project:\\ +% {\tt http://http://www-sop.inria.fr/oasis/Formavie/} +% +% \bibitem{EHELM} The HELM project:\\ +% {\tt http://www.cs.unibo.it/helm/} +% +% \bibitem{MATHWEB} The MathWeb project:\\ +% {\tt http://www.mathweb.org/} +% +% \bibitem{PCOQ} The PCoq project:\\ +% {\tt http://www-sop.inria.fr/lemme/pcoq/} +% +% \bibitem{HELM} A.Asperti, L.Padovani, C.Sacerdoti Coen, I.Schena. +% Towards a library of formal mathematics. Panel session of +% the 13th International Conference on Theorem Proving in Higher Order Logics (TPHOLS'2000), +% Portland, Oregon, USA. +% +% \bibitem{Ring} S. Boutin. Using reflection to build efficient and certified +% decision procedures. In Martin Abadi and Takahashi Ito, editors, TACS'97, +% volume 1281. LNCS, Springer-Verlag, 1997. +% +% \bibitem{YANNTHESIS} Y.Coscoy. \emph{Explication textuelle de preuves pour le +% Calcul des Constructions Inductives}, PhD. Thesis, Universit\'e de Nice-Sophia +% Antipolis, 2000. +% +% \bibitem{ALFA} T. Hallgreen, Aarne Ranta. An Extensible Proof Text Editor. +% Presented at LPAR2000. +% +% \bibitem{Necula} G. Necula, P. Lee. Safe Kernel Extensions Without Run-Time +% Checking. Presented at OSDI'96, October 1996. +% +% \bibitem{Necula2} G. Necula, P. Lee. Efficient Representation and Validation of Proofs. Presented at LICS'98, June 1998 +% +% \bibitem{Werner} B. Werner. \emph{Une Th\'eorie des Constructions Inductives}, +% PhD. Thesis, Universit\'e Paris VII, May 1994. 
+ +\end{thebibliography} + +\end{document} diff --git a/helm/papers/calculemus-2003/llncs.cls b/helm/papers/calculemus-2003/llncs.cls new file mode 100644 index 000000000..df98f8af5 --- /dev/null +++ b/helm/papers/calculemus-2003/llncs.cls @@ -0,0 +1,1015 @@ +% LLNCS DOCUMENT CLASS -- version 2.8 +% for LaTeX2e +% +\NeedsTeXFormat{LaTeX2e}[1995/12/01] +\ProvidesClass{llncs}[2000/05/16 v2.8 +^^JLaTeX document class for Lecture Notes in Computer Science] +% Options +\let\if@envcntreset\iffalse +\DeclareOption{envcountreset}{\let\if@envcntreset\iftrue} +\DeclareOption{citeauthoryear}{\let\citeauthoryear=Y} +\DeclareOption{oribibl}{\let\oribibl=Y} +\let\if@custvec\iftrue +\DeclareOption{orivec}{\let\if@custvec\iffalse} +\let\if@envcntsame\iffalse +\DeclareOption{envcountsame}{\let\if@envcntsame\iftrue} +\let\if@envcntsect\iffalse +\DeclareOption{envcountsect}{\let\if@envcntsect\iftrue} +\let\if@runhead\iffalse +\DeclareOption{runningheads}{\let\if@runhead\iftrue} + +\let\if@openbib\iffalse +\DeclareOption{openbib}{\let\if@openbib\iftrue} + +\DeclareOption*{\PassOptionsToClass{\CurrentOption}{article}} + +\ProcessOptions + +\LoadClass[twoside]{article} +\RequirePackage{multicol} % needed for the list of participants, index + +\setlength{\textwidth}{12.2cm} +\setlength{\textheight}{19.3cm} + +% Ragged bottom for the actual page +\def\thisbottomragged{\def\@textbottom{\vskip\z@ plus.0001fil +\global\let\@textbottom\relax}} + +\renewcommand\small{% + \@setfontsize\small\@ixpt{11}% + \abovedisplayskip 8.5\p@ \@plus3\p@ \@minus4\p@ + \abovedisplayshortskip \z@ \@plus2\p@ + \belowdisplayshortskip 4\p@ \@plus2\p@ \@minus2\p@ + \def\@listi{\leftmargin\leftmargini + \parsep 0\p@ \@plus1\p@ \@minus\p@ + \topsep 8\p@ \@plus2\p@ \@minus4\p@ + \itemsep0\p@}% + \belowdisplayskip \abovedisplayskip +} + +\frenchspacing +\widowpenalty=10000 +\clubpenalty=10000 + +\setlength\oddsidemargin {63\p@} +\setlength\evensidemargin {63\p@} +\setlength\marginparwidth {90\p@} + +\setlength\headsep {16\p@} + +\setlength\footnotesep{7.7\p@} +\setlength\textfloatsep{8mm\@plus 2\p@ \@minus 4\p@} +\setlength\intextsep {8mm\@plus 2\p@ \@minus 2\p@} + +\setcounter{secnumdepth}{2} + +\newcounter {chapter} +\renewcommand\thechapter {\@arabic\c@chapter} + +\newif\if@mainmatter \@mainmattertrue +\newcommand\frontmatter{\cleardoublepage + \@mainmatterfalse\pagenumbering{Roman}} +\newcommand\mainmatter{\cleardoublepage + \@mainmattertrue\pagenumbering{arabic}} +\newcommand\backmatter{\if@openright\cleardoublepage\else\clearpage\fi + \@mainmatterfalse} + +\renewcommand\part{\cleardoublepage + \thispagestyle{empty}% + \if@twocolumn + \onecolumn + \@tempswatrue + \else + \@tempswafalse + \fi + \null\vfil + \secdef\@part\@spart} + +\def\@part[#1]#2{% + \ifnum \c@secnumdepth >-2\relax + \refstepcounter{part}% + \addcontentsline{toc}{part}{\thepart\hspace{1em}#1}% + \else + \addcontentsline{toc}{part}{#1}% + \fi + \markboth{}{}% + {\centering + \interlinepenalty \@M + \normalfont + \ifnum \c@secnumdepth >-2\relax + \huge\bfseries \partname~\thepart + \par + \vskip 20\p@ + \fi + \Huge \bfseries #2\par}% + \@endpart} +\def\@spart#1{% + {\centering + \interlinepenalty \@M + \normalfont + \Huge \bfseries #1\par}% + \@endpart} +\def\@endpart{\vfil\newpage + \if@twoside + \null + \thispagestyle{empty}% + \newpage + \fi + \if@tempswa + \twocolumn + \fi} + +\newcommand\chapter{\clearpage + \thispagestyle{empty}% + \global\@topnum\z@ + \@afterindentfalse + \secdef\@chapter\@schapter} +\def\@chapter[#1]#2{\ifnum \c@secnumdepth >\m@ne + \if@mainmatter 
+ \refstepcounter{chapter}% + \typeout{\@chapapp\space\thechapter.}% + \addcontentsline{toc}{chapter}% + {\protect\numberline{\thechapter}#1}% + \else + \addcontentsline{toc}{chapter}{#1}% + \fi + \else + \addcontentsline{toc}{chapter}{#1}% + \fi + \chaptermark{#1}% + \addtocontents{lof}{\protect\addvspace{10\p@}}% + \addtocontents{lot}{\protect\addvspace{10\p@}}% + \if@twocolumn + \@topnewpage[\@makechapterhead{#2}]% + \else + \@makechapterhead{#2}% + \@afterheading + \fi} +\def\@makechapterhead#1{% +% \vspace*{50\p@}% + {\centering + \ifnum \c@secnumdepth >\m@ne + \if@mainmatter + \large\bfseries \@chapapp{} \thechapter + \par\nobreak + \vskip 20\p@ + \fi + \fi + \interlinepenalty\@M + \Large \bfseries #1\par\nobreak + \vskip 40\p@ + }} +\def\@schapter#1{\if@twocolumn + \@topnewpage[\@makeschapterhead{#1}]% + \else + \@makeschapterhead{#1}% + \@afterheading + \fi} +\def\@makeschapterhead#1{% +% \vspace*{50\p@}% + {\centering + \normalfont + \interlinepenalty\@M + \Large \bfseries #1\par\nobreak + \vskip 40\p@ + }} + +\renewcommand\section{\@startsection{section}{1}{\z@}% + {-18\p@ \@plus -4\p@ \@minus -4\p@}% + {12\p@ \@plus 4\p@ \@minus 4\p@}% + {\normalfont\large\bfseries\boldmath + \rightskip=\z@ \@plus 8em\pretolerance=10000 }} +\renewcommand\subsection{\@startsection{subsection}{2}{\z@}% + {-18\p@ \@plus -4\p@ \@minus -4\p@}% + {8\p@ \@plus 4\p@ \@minus 4\p@}% + {\normalfont\normalsize\bfseries\boldmath + \rightskip=\z@ \@plus 8em\pretolerance=10000 }} +\renewcommand\subsubsection{\@startsection{subsubsection}{3}{\z@}% + {-18\p@ \@plus -4\p@ \@minus -4\p@}% + {-0.5em \@plus -0.22em \@minus -0.1em}% + {\normalfont\normalsize\bfseries\boldmath}} +\renewcommand\paragraph{\@startsection{paragraph}{4}{\z@}% + {-12\p@ \@plus -4\p@ \@minus -4\p@}% + {-0.5em \@plus -0.22em \@minus -0.1em}% + {\normalfont\normalsize\itshape}} +\renewcommand\subparagraph[1]{\typeout{LLNCS warning: You should not use + \string\subparagraph\space with this class}\vskip0.5cm +You should not use \verb|\subparagraph| with this class.\vskip0.5cm} + +\DeclareMathSymbol{\Gamma}{\mathalpha}{letters}{"00} +\DeclareMathSymbol{\Delta}{\mathalpha}{letters}{"01} +\DeclareMathSymbol{\Theta}{\mathalpha}{letters}{"02} +\DeclareMathSymbol{\Lambda}{\mathalpha}{letters}{"03} +\DeclareMathSymbol{\Xi}{\mathalpha}{letters}{"04} +\DeclareMathSymbol{\Pi}{\mathalpha}{letters}{"05} +\DeclareMathSymbol{\Sigma}{\mathalpha}{letters}{"06} +\DeclareMathSymbol{\Upsilon}{\mathalpha}{letters}{"07} +\DeclareMathSymbol{\Phi}{\mathalpha}{letters}{"08} +\DeclareMathSymbol{\Psi}{\mathalpha}{letters}{"09} +\DeclareMathSymbol{\Omega}{\mathalpha}{letters}{"0A} + +\let\footnotesize\small + +\if@custvec +\def\vec#1{\mathchoice{\mbox{\boldmath$\displaystyle#1$}} +{\mbox{\boldmath$\textstyle#1$}} +{\mbox{\boldmath$\scriptstyle#1$}} +{\mbox{\boldmath$\scriptscriptstyle#1$}}} +\fi + +\def\squareforqed{\hbox{\rlap{$\sqcap$}$\sqcup$}} +\def\qed{\ifmmode\squareforqed\else{\unskip\nobreak\hfil +\penalty50\hskip1em\null\nobreak\hfil\squareforqed +\parfillskip=0pt\finalhyphendemerits=0\endgraf}\fi} + +\def\getsto{\mathrel{\mathchoice {\vcenter{\offinterlineskip +\halign{\hfil +$\displaystyle##$\hfil\cr\gets\cr\to\cr}}} +{\vcenter{\offinterlineskip\halign{\hfil$\textstyle##$\hfil\cr\gets +\cr\to\cr}}} +{\vcenter{\offinterlineskip\halign{\hfil$\scriptstyle##$\hfil\cr\gets +\cr\to\cr}}} +{\vcenter{\offinterlineskip\halign{\hfil$\scriptscriptstyle##$\hfil\cr +\gets\cr\to\cr}}}}} +\def\lid{\mathrel{\mathchoice {\vcenter{\offinterlineskip\halign{\hfil 
+$\displaystyle##$\hfil\cr<\cr\noalign{\vskip1.2pt}=\cr}}}
+{\vcenter{\offinterlineskip\halign{\hfil$\textstyle##$\hfil\cr<\cr
+\noalign{\vskip1.2pt}=\cr}}}
+{\vcenter{\offinterlineskip\halign{\hfil$\scriptstyle##$\hfil\cr<\cr
+\noalign{\vskip1pt}=\cr}}}
+{\vcenter{\offinterlineskip\halign{\hfil$\scriptscriptstyle##$\hfil\cr
+<\cr
+\noalign{\vskip0.9pt}=\cr}}}}}
+\def\gid{\mathrel{\mathchoice {\vcenter{\offinterlineskip\halign{\hfil
+$\displaystyle##$\hfil\cr>\cr\noalign{\vskip1.2pt}=\cr}}}
+{\vcenter{\offinterlineskip\halign{\hfil$\textstyle##$\hfil\cr>\cr
+\noalign{\vskip1.2pt}=\cr}}}
+{\vcenter{\offinterlineskip\halign{\hfil$\scriptstyle##$\hfil\cr>\cr
+\noalign{\vskip1pt}=\cr}}}
+{\vcenter{\offinterlineskip\halign{\hfil$\scriptscriptstyle##$\hfil\cr
+>\cr
+\noalign{\vskip0.9pt}=\cr}}}}}
+\def\grole{\mathrel{\mathchoice {\vcenter{\offinterlineskip
+\halign{\hfil
+$\displaystyle##$\hfil\cr>\cr\noalign{\vskip-1pt}<\cr}}}
+{\vcenter{\offinterlineskip\halign{\hfil$\textstyle##$\hfil\cr
+>\cr\noalign{\vskip-1pt}<\cr}}}
+{\vcenter{\offinterlineskip\halign{\hfil$\scriptstyle##$\hfil\cr
+>\cr\noalign{\vskip-0.8pt}<\cr}}}
+{\vcenter{\offinterlineskip\halign{\hfil$\scriptscriptstyle##$\hfil\cr
+>\cr\noalign{\vskip-0.3pt}<\cr}}}}}
+\def\bbbr{{\rm I\!R}} % real numbers
+\def\bbbm{{\rm I\!M}}
+\def\bbbn{{\rm I\!N}} % natural numbers
+\def\bbbf{{\rm I\!F}}
+\def\bbbh{{\rm I\!H}}
+\def\bbbk{{\rm I\!K}}
+\def\bbbp{{\rm I\!P}}
+\def\bbbone{{\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}
+{\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}}
+\def\bbbc{{\mathchoice {\setbox0=\hbox{$\displaystyle\rm C$}\hbox{\hbox
+to0pt{\kern0.4\wd0\vrule height0.9\ht0\hss}\box0}}
+{\setbox0=\hbox{$\textstyle\rm C$}\hbox{\hbox
+to0pt{\kern0.4\wd0\vrule height0.9\ht0\hss}\box0}}
+{\setbox0=\hbox{$\scriptstyle\rm C$}\hbox{\hbox
+to0pt{\kern0.4\wd0\vrule height0.9\ht0\hss}\box0}}
+{\setbox0=\hbox{$\scriptscriptstyle\rm C$}\hbox{\hbox
+to0pt{\kern0.4\wd0\vrule height0.9\ht0\hss}\box0}}}}
+\def\bbbq{{\mathchoice {\setbox0=\hbox{$\displaystyle\rm
+Q$}\hbox{\raise
+0.15\ht0\hbox to0pt{\kern0.4\wd0\vrule height0.8\ht0\hss}\box0}}
+{\setbox0=\hbox{$\textstyle\rm Q$}\hbox{\raise
+0.15\ht0\hbox to0pt{\kern0.4\wd0\vrule height0.8\ht0\hss}\box0}}
+{\setbox0=\hbox{$\scriptstyle\rm Q$}\hbox{\raise
+0.15\ht0\hbox to0pt{\kern0.4\wd0\vrule height0.7\ht0\hss}\box0}}
+{\setbox0=\hbox{$\scriptscriptstyle\rm Q$}\hbox{\raise
+0.15\ht0\hbox to0pt{\kern0.4\wd0\vrule height0.7\ht0\hss}\box0}}}}
+\def\bbbt{{\mathchoice {\setbox0=\hbox{$\displaystyle\rm
+T$}\hbox{\hbox to0pt{\kern0.3\wd0\vrule height0.9\ht0\hss}\box0}}
+{\setbox0=\hbox{$\textstyle\rm T$}\hbox{\hbox
+to0pt{\kern0.3\wd0\vrule height0.9\ht0\hss}\box0}}
+{\setbox0=\hbox{$\scriptstyle\rm T$}\hbox{\hbox
+to0pt{\kern0.3\wd0\vrule height0.9\ht0\hss}\box0}}
+{\setbox0=\hbox{$\scriptscriptstyle\rm T$}\hbox{\hbox
+to0pt{\kern0.3\wd0\vrule height0.9\ht0\hss}\box0}}}}
+\def\bbbs{{\mathchoice
+{\setbox0=\hbox{$\displaystyle \rm S$}\hbox{\raise0.5\ht0\hbox
+to0pt{\kern0.35\wd0\vrule height0.45\ht0\hss}\hbox
+to0pt{\kern0.55\wd0\vrule height0.5\ht0\hss}\box0}}
+{\setbox0=\hbox{$\textstyle \rm S$}\hbox{\raise0.5\ht0\hbox
+to0pt{\kern0.35\wd0\vrule height0.45\ht0\hss}\hbox
+to0pt{\kern0.55\wd0\vrule height0.5\ht0\hss}\box0}}
+{\setbox0=\hbox{$\scriptstyle \rm S$}\hbox{\raise0.5\ht0\hbox
+to0pt{\kern0.35\wd0\vrule height0.45\ht0\hss}\raise0.05\ht0\hbox
+to0pt{\kern0.5\wd0\vrule height0.45\ht0\hss}\box0}}
+{\setbox0=\hbox{$\scriptscriptstyle\rm S$}\hbox{\raise0.5\ht0\hbox
+to0pt{\kern0.4\wd0\vrule
height0.45\ht0\hss}\raise0.05\ht0\hbox +to0pt{\kern0.55\wd0\vrule height0.45\ht0\hss}\box0}}}} +\def\bbbz{{\mathchoice {\hbox{$\mathsf\textstyle Z\kern-0.4em Z$}} +{\hbox{$\mathsf\textstyle Z\kern-0.4em Z$}} +{\hbox{$\mathsf\scriptstyle Z\kern-0.3em Z$}} +{\hbox{$\mathsf\scriptscriptstyle Z\kern-0.2em Z$}}}} + +\let\ts\, + +\setlength\leftmargini {17\p@} +\setlength\leftmargin {\leftmargini} +\setlength\leftmarginii {\leftmargini} +\setlength\leftmarginiii {\leftmargini} +\setlength\leftmarginiv {\leftmargini} +\setlength \labelsep {.5em} +\setlength \labelwidth{\leftmargini} +\addtolength\labelwidth{-\labelsep} + +\def\@listI{\leftmargin\leftmargini + \parsep 0\p@ \@plus1\p@ \@minus\p@ + \topsep 8\p@ \@plus2\p@ \@minus4\p@ + \itemsep0\p@} +\let\@listi\@listI +\@listi +\def\@listii {\leftmargin\leftmarginii + \labelwidth\leftmarginii + \advance\labelwidth-\labelsep + \topsep 0\p@ \@plus2\p@ \@minus\p@} +\def\@listiii{\leftmargin\leftmarginiii + \labelwidth\leftmarginiii + \advance\labelwidth-\labelsep + \topsep 0\p@ \@plus\p@\@minus\p@ + \parsep \z@ + \partopsep \p@ \@plus\z@ \@minus\p@} + +\renewcommand\labelitemi{\normalfont\bfseries --} +\renewcommand\labelitemii{$\m@th\bullet$} + +\setlength\arraycolsep{1.4\p@} +\setlength\tabcolsep{1.4\p@} + +\def\tableofcontents{\chapter*{\contentsname\@mkboth{{\contentsname}}% + {{\contentsname}}} + \def\authcount##1{\setcounter{auco}{##1}\setcounter{@auth}{1}} + \def\lastand{\ifnum\value{auco}=2\relax + \unskip{} \andname\ + \else + \unskip \lastandname\ + \fi}% + \def\and{\stepcounter{@auth}\relax + \ifnum\value{@auth}=\value{auco}% + \lastand + \else + \unskip, + \fi}% + \@starttoc{toc}\if@restonecol\twocolumn\fi} + +\def\l@part#1#2{\addpenalty{\@secpenalty}% + \addvspace{2em plus\p@}% % space above part line + \begingroup + \parindent \z@ + \rightskip \z@ plus 5em + \hrule\vskip5pt + \large % same size as for a contribution heading + \bfseries\boldmath % set line in boldface + \leavevmode % TeX command to enter horizontal mode. 
+ #1\par + \vskip5pt + \hrule + \vskip1pt + \nobreak % Never break after part entry + \endgroup} + +\def\@dotsep{2} + +\def\hyperhrefextend{\ifx\hyper@anchor\@undefined\else +{chapter.\thechapter}\fi} + +\def\addnumcontentsmark#1#2#3{% +\addtocontents{#1}{\protect\contentsline{#2}{\protect\numberline + {\thechapter}#3}{\thepage}\hyperhrefextend}} +\def\addcontentsmark#1#2#3{% +\addtocontents{#1}{\protect\contentsline{#2}{#3}{\thepage}\hyperhrefextend}} +\def\addcontentsmarkwop#1#2#3{% +\addtocontents{#1}{\protect\contentsline{#2}{#3}{0}\hyperhrefextend}} + +\def\@adcmk[#1]{\ifcase #1 \or +\def\@gtempa{\addnumcontentsmark}% + \or \def\@gtempa{\addcontentsmark}% + \or \def\@gtempa{\addcontentsmarkwop}% + \fi\@gtempa{toc}{chapter}} +\def\addtocmark{\@ifnextchar[{\@adcmk}{\@adcmk[3]}} + +\def\l@chapter#1#2{\addpenalty{-\@highpenalty} + \vskip 1.0em plus 1pt \@tempdima 1.5em \begingroup + \parindent \z@ \rightskip \@pnumwidth + \parfillskip -\@pnumwidth + \leavevmode \advance\leftskip\@tempdima \hskip -\leftskip + {\large\bfseries\boldmath#1}\ifx0#2\hfil\null + \else + \nobreak + \leaders\hbox{$\m@th \mkern \@dotsep mu.\mkern + \@dotsep mu$}\hfill + \nobreak\hbox to\@pnumwidth{\hss #2}% + \fi\par + \penalty\@highpenalty \endgroup} + +\def\l@title#1#2{\addpenalty{-\@highpenalty} + \addvspace{8pt plus 1pt} + \@tempdima \z@ + \begingroup + \parindent \z@ \rightskip \@tocrmarg + \parfillskip -\@tocrmarg + \leavevmode \advance\leftskip\@tempdima \hskip -\leftskip + #1\nobreak + \leaders\hbox{$\m@th \mkern \@dotsep mu.\mkern + \@dotsep mu$}\hfill + \nobreak\hbox to\@pnumwidth{\hss #2}\par + \penalty\@highpenalty \endgroup} + +\setcounter{tocdepth}{0} +\newdimen\tocchpnum +\newdimen\tocsecnum +\newdimen\tocsectotal +\newdimen\tocsubsecnum +\newdimen\tocsubsectotal +\newdimen\tocsubsubsecnum +\newdimen\tocsubsubsectotal +\newdimen\tocparanum +\newdimen\tocparatotal +\newdimen\tocsubparanum +\tocchpnum=\z@ % no chapter numbers +\tocsecnum=15\p@ % section 88. 
plus 2.222pt +\tocsubsecnum=23\p@ % subsection 88.8 plus 2.222pt +\tocsubsubsecnum=27\p@ % subsubsection 88.8.8 plus 1.444pt +\tocparanum=35\p@ % paragraph 88.8.8.8 plus 1.666pt +\tocsubparanum=43\p@ % subparagraph 88.8.8.8.8 plus 1.888pt +\def\calctocindent{% +\tocsectotal=\tocchpnum +\advance\tocsectotal by\tocsecnum +\tocsubsectotal=\tocsectotal +\advance\tocsubsectotal by\tocsubsecnum +\tocsubsubsectotal=\tocsubsectotal +\advance\tocsubsubsectotal by\tocsubsubsecnum +\tocparatotal=\tocsubsubsectotal +\advance\tocparatotal by\tocparanum} +\calctocindent + +\def\l@section{\@dottedtocline{1}{\tocchpnum}{\tocsecnum}} +\def\l@subsection{\@dottedtocline{2}{\tocsectotal}{\tocsubsecnum}} +\def\l@subsubsection{\@dottedtocline{3}{\tocsubsectotal}{\tocsubsubsecnum}} +\def\l@paragraph{\@dottedtocline{4}{\tocsubsubsectotal}{\tocparanum}} +\def\l@subparagraph{\@dottedtocline{5}{\tocparatotal}{\tocsubparanum}} + +\def\listoffigures{\@restonecolfalse\if@twocolumn\@restonecoltrue\onecolumn + \fi\section*{\listfigurename\@mkboth{{\listfigurename}}{{\listfigurename}}} + \@starttoc{lof}\if@restonecol\twocolumn\fi} +\def\l@figure{\@dottedtocline{1}{0em}{1.5em}} + +\def\listoftables{\@restonecolfalse\if@twocolumn\@restonecoltrue\onecolumn + \fi\section*{\listtablename\@mkboth{{\listtablename}}{{\listtablename}}} + \@starttoc{lot}\if@restonecol\twocolumn\fi} +\let\l@table\l@figure + +\renewcommand\listoffigures{% + \section*{\listfigurename + \@mkboth{\listfigurename}{\listfigurename}}% + \@starttoc{lof}% + } + +\renewcommand\listoftables{% + \section*{\listtablename + \@mkboth{\listtablename}{\listtablename}}% + \@starttoc{lot}% + } + +\ifx\oribibl\undefined +\ifx\citeauthoryear\undefined +\renewenvironment{thebibliography}[1] + {\section*{\refname} + \def\@biblabel##1{##1.} + \small + \list{\@biblabel{\@arabic\c@enumiv}}% + {\settowidth\labelwidth{\@biblabel{#1}}% + \leftmargin\labelwidth + \advance\leftmargin\labelsep + \if@openbib + \advance\leftmargin\bibindent + \itemindent -\bibindent + \listparindent \itemindent + \parsep \z@ + \fi + \usecounter{enumiv}% + \let\p@enumiv\@empty + \renewcommand\theenumiv{\@arabic\c@enumiv}}% + \if@openbib + \renewcommand\newblock{\par}% + \else + \renewcommand\newblock{\hskip .11em \@plus.33em \@minus.07em}% + \fi + \sloppy\clubpenalty4000\widowpenalty4000% + \sfcode`\.=\@m} + {\def\@noitemerr + {\@latex@warning{Empty `thebibliography' environment}}% + \endlist} +\def\@lbibitem[#1]#2{\item[{[#1]}\hfill]\if@filesw + {\let\protect\noexpand\immediate + \write\@auxout{\string\bibcite{#2}{#1}}}\fi\ignorespaces} +\newcount\@tempcntc +\def\@citex[#1]#2{\if@filesw\immediate\write\@auxout{\string\citation{#2}}\fi + \@tempcnta\z@\@tempcntb\m@ne\def\@citea{}\@cite{\@for\@citeb:=#2\do + {\@ifundefined + {b@\@citeb}{\@citeo\@tempcntb\m@ne\@citea\def\@citea{,}{\bfseries + ?}\@warning + {Citation `\@citeb' on page \thepage \space undefined}}% + {\setbox\z@\hbox{\global\@tempcntc0\csname b@\@citeb\endcsname\relax}% + \ifnum\@tempcntc=\z@ \@citeo\@tempcntb\m@ne + \@citea\def\@citea{,}\hbox{\csname b@\@citeb\endcsname}% + \else + \advance\@tempcntb\@ne + \ifnum\@tempcntb=\@tempcntc + \else\advance\@tempcntb\m@ne\@citeo + \@tempcnta\@tempcntc\@tempcntb\@tempcntc\fi\fi}}\@citeo}{#1}} +\def\@citeo{\ifnum\@tempcnta>\@tempcntb\else + \@citea\def\@citea{,\,\hskip\z@skip}% + \ifnum\@tempcnta=\@tempcntb\the\@tempcnta\else + {\advance\@tempcnta\@ne\ifnum\@tempcnta=\@tempcntb \else + \def\@citea{--}\fi + \advance\@tempcnta\m@ne\the\@tempcnta\@citea\the\@tempcntb}\fi\fi} +\else 
+\renewenvironment{thebibliography}[1] + {\section*{\refname} + \small + \list{}% + {\settowidth\labelwidth{}% + \leftmargin\parindent + \itemindent=-\parindent + \labelsep=\z@ + \if@openbib + \advance\leftmargin\bibindent + \itemindent -\bibindent + \listparindent \itemindent + \parsep \z@ + \fi + \usecounter{enumiv}% + \let\p@enumiv\@empty + \renewcommand\theenumiv{}}% + \if@openbib + \renewcommand\newblock{\par}% + \else + \renewcommand\newblock{\hskip .11em \@plus.33em \@minus.07em}% + \fi + \sloppy\clubpenalty4000\widowpenalty4000% + \sfcode`\.=\@m} + {\def\@noitemerr + {\@latex@warning{Empty `thebibliography' environment}}% + \endlist} + \def\@cite#1{#1}% + \def\@lbibitem[#1]#2{\item[]\if@filesw + {\def\protect##1{\string ##1\space}\immediate + \write\@auxout{\string\bibcite{#2}{#1}}}\fi\ignorespaces} + \fi +\else +\@cons\@openbib@code{\noexpand\small} +\fi + +\def\idxquad{\hskip 10\p@}% space that divides entry from number + +\def\@idxitem{\par\hangindent 10\p@} + +\def\subitem{\par\setbox0=\hbox{--\enspace}% second order + \noindent\hangindent\wd0\box0}% index entry + +\def\subsubitem{\par\setbox0=\hbox{--\,--\enspace}% third + \noindent\hangindent\wd0\box0}% order index entry + +\def\indexspace{\par \vskip 10\p@ plus5\p@ minus3\p@\relax} + +\renewenvironment{theindex} + {\@mkboth{\indexname}{\indexname}% + \thispagestyle{empty}\parindent\z@ + \parskip\z@ \@plus .3\p@\relax + \let\item\par + \def\,{\relax\ifmmode\mskip\thinmuskip + \else\hskip0.2em\ignorespaces\fi}% + \normalfont\small + \begin{multicols}{2}[\@makeschapterhead{\indexname}]% + } + {\end{multicols}} + +\renewcommand\footnoterule{% + \kern-3\p@ + \hrule\@width 2truecm + \kern2.6\p@} + \newdimen\fnindent + \fnindent1em +\long\def\@makefntext#1{% + \parindent \fnindent% + \leftskip \fnindent% + \noindent + \llap{\hb@xt@1em{\hss\@makefnmark\ }}\ignorespaces#1} + +\long\def\@makecaption#1#2{% + \vskip\abovecaptionskip + \sbox\@tempboxa{{\bfseries #1.} #2}% + \ifdim \wd\@tempboxa >\hsize + {\bfseries #1.} #2\par + \else + \global \@minipagefalse + \hb@xt@\hsize{\hfil\box\@tempboxa\hfil}% + \fi + \vskip\belowcaptionskip} + +\def\fps@figure{htbp} +\def\fnum@figure{\figurename\thinspace\thefigure} +\def \@floatboxreset {% + \reset@font + \small + \@setnobreak + \@setminipage +} +\def\fps@table{htbp} +\def\fnum@table{\tablename~\thetable} +\renewenvironment{table} + {\setlength\abovecaptionskip{0\p@}% + \setlength\belowcaptionskip{10\p@}% + \@float{table}} + {\end@float} +\renewenvironment{table*} + {\setlength\abovecaptionskip{0\p@}% + \setlength\belowcaptionskip{10\p@}% + \@dblfloat{table}} + {\end@dblfloat} + +\long\def\@caption#1[#2]#3{\par\addcontentsline{\csname + ext@#1\endcsname}{#1}{\protect\numberline{\csname + the#1\endcsname}{\ignorespaces #2}}\begingroup + \@parboxrestore + \@makecaption{\csname fnum@#1\endcsname}{\ignorespaces #3}\par + \endgroup} + +% LaTeX does not provide a command to enter the authors institute +% addresses. The \institute command is defined here. 
+ +\newcounter{@inst} +\newcounter{@auth} +\newcounter{auco} +\def\andname{and} +\def\lastandname{\unskip, and} +\newdimen\instindent +\newbox\authrun +\newtoks\authorrunning +\newtoks\tocauthor +\newbox\titrun +\newtoks\titlerunning +\newtoks\toctitle + +\def\clearheadinfo{\gdef\@author{No Author Given}% + \gdef\@title{No Title Given}% + \gdef\@subtitle{}% + \gdef\@institute{No Institute Given}% + \gdef\@thanks{}% + \global\titlerunning={}\global\authorrunning={}% + \global\toctitle={}\global\tocauthor={}} + +\def\institute#1{\gdef\@institute{#1}} + +\def\institutename{\par + \begingroup + \parskip=\z@ + \parindent=\z@ + \setcounter{@inst}{1}% + \def\and{\par\stepcounter{@inst}% + \noindent$^{\the@inst}$\enspace\ignorespaces}% + \setbox0=\vbox{\def\thanks##1{}\@institute}% + \ifnum\c@@inst=1\relax + \else + \setcounter{footnote}{\c@@inst}% + \setcounter{@inst}{1}% + \noindent$^{\the@inst}$\enspace + \fi + \ignorespaces + \@institute\par + \endgroup} + +\def\@fnsymbol#1{\ensuremath{\ifcase#1\or\star\or{\star\star}\or + {\star\star\star}\or \dagger\or \ddagger\or + \mathchar "278\or \mathchar "27B\or \|\or **\or \dagger\dagger + \or \ddagger\ddagger \else\@ctrerr\fi}} + +\def\inst#1{\unskip$^{#1}$} +\def\fnmsep{\unskip$^,$} +\def\email#1{{\tt#1}} +\AtBeginDocument{\@ifundefined{url}{\def\url#1{#1}}{}} +\def\homedir{\~{ }} + +\def\subtitle#1{\gdef\@subtitle{#1}} +\clearheadinfo + +\renewcommand\maketitle{\newpage + \refstepcounter{chapter}% + \stepcounter{section}% + \setcounter{section}{0}% + \setcounter{subsection}{0}% + \setcounter{figure}{0} + \setcounter{table}{0} + \setcounter{equation}{0} + \setcounter{footnote}{0}% + \begingroup + \parindent=\z@ + \renewcommand\thefootnote{\@fnsymbol\c@footnote}% + \if@twocolumn + \ifnum \col@number=\@ne + \@maketitle + \else + \twocolumn[\@maketitle]% + \fi + \else + \newpage + \global\@topnum\z@ % Prevents figures from going at top of page. + \@maketitle + \fi + \thispagestyle{empty}\@thanks +% + \def\\{\unskip\ \ignorespaces}\def\inst##1{\unskip{}}% + \def\thanks##1{\unskip{}}\def\fnmsep{\unskip}% + \instindent=\hsize + \advance\instindent by-\headlineindent + \if!\the\toctitle!\addcontentsline{toc}{title}{\@title}\else + \addcontentsline{toc}{title}{\the\toctitle}\fi + \if@runhead + \if!\the\titlerunning!\else + \edef\@title{\the\titlerunning}% + \fi + \global\setbox\titrun=\hbox{\small\rm\unboldmath\ignorespaces\@title}% + \ifdim\wd\titrun>\instindent + \typeout{Title too long for running head. Please supply}% + \typeout{a shorter form with \string\titlerunning\space prior to + \string\maketitle}% + \global\setbox\titrun=\hbox{\small\rm + Title Suppressed Due to Excessive Length}% + \fi + \xdef\@title{\copy\titrun}% + \fi +% + \if!\the\tocauthor!\relax + {\def\and{\noexpand\protect\noexpand\and}% + \protected@xdef\toc@uthor{\@author}}% + \else + \def\\{\noexpand\protect\noexpand\newline}% + \protected@xdef\scratch{\the\tocauthor}% + \protected@xdef\toc@uthor{\scratch}% + \fi + \addtocontents{toc}{{\protect\raggedright\protect\leftskip15\p@ + \protect\rightskip\@tocrmarg + \protect\itshape\toc@uthor\protect\endgraf}}% + \if@runhead + \if!\the\authorrunning! + \value{@inst}=\value{@auth}% + \setcounter{@auth}{1}% + \else + \edef\@author{\the\authorrunning}% + \fi + \global\setbox\authrun=\hbox{\small\unboldmath\@author\unskip}% + \ifdim\wd\authrun>\instindent + \typeout{Names of authors too long for running head. 
Please supply}% + \typeout{a shorter form with \string\authorrunning\space prior to + \string\maketitle}% + \global\setbox\authrun=\hbox{\small\rm + Authors Suppressed Due to Excessive Length}% + \fi + \xdef\@author{\copy\authrun}% + \markboth{\@author}{\@title}% + \fi + \endgroup + \setcounter{footnote}{0}% + \clearheadinfo} +% +\def\@maketitle{\newpage + \markboth{}{}% + \def\lastand{\ifnum\value{@inst}=2\relax + \unskip{} \andname\ + \else + \unskip \lastandname\ + \fi}% + \def\and{\stepcounter{@auth}\relax + \ifnum\value{@auth}=\value{@inst}% + \lastand + \else + \unskip, + \fi}% + \begin{center}% + {\Large \bfseries\boldmath + \pretolerance=10000 + \@title \par}\vskip .8cm +\if!\@subtitle!\else {\large \bfseries\boldmath + \vskip -.65cm + \pretolerance=10000 + \@subtitle \par}\vskip .8cm\fi + \setbox0=\vbox{\setcounter{@auth}{1}\def\and{\stepcounter{@auth}}% + \def\thanks##1{}\@author}% + \global\value{@inst}=\value{@auth}% + \global\value{auco}=\value{@auth}% + \setcounter{@auth}{1}% +{\lineskip .5em +\noindent\ignorespaces +\@author\vskip.35cm} + {\small\institutename} + \end{center}% + } + +% definition of the "\spnewtheorem" command. +% +% Usage: +% +% \spnewtheorem{env_nam}{caption}[within]{cap_font}{body_font} +% or \spnewtheorem{env_nam}[numbered_like]{caption}{cap_font}{body_font} +% or \spnewtheorem*{env_nam}{caption}{cap_font}{body_font} +% +% New is "cap_font" and "body_font". It stands for +% fontdefinition of the caption and the text itself. +% +% "\spnewtheorem*" gives a theorem without number. +% +% A defined spnewthoerem environment is used as described +% by Lamport. +% +%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% + +\def\@thmcountersep{} +\def\@thmcounterend{.} + +\def\spnewtheorem{\@ifstar{\@sthm}{\@Sthm}} + +% definition of \spnewtheorem with number + +\def\@spnthm#1#2{% + \@ifnextchar[{\@spxnthm{#1}{#2}}{\@spynthm{#1}{#2}}} +\def\@Sthm#1{\@ifnextchar[{\@spothm{#1}}{\@spnthm{#1}}} + +\def\@spxnthm#1#2[#3]#4#5{\expandafter\@ifdefinable\csname #1\endcsname + {\@definecounter{#1}\@addtoreset{#1}{#3}% + \expandafter\xdef\csname the#1\endcsname{\expandafter\noexpand + \csname the#3\endcsname \noexpand\@thmcountersep \@thmcounter{#1}}% + \expandafter\xdef\csname #1name\endcsname{#2}% + \global\@namedef{#1}{\@spthm{#1}{\csname #1name\endcsname}{#4}{#5}}% + \global\@namedef{end#1}{\@endtheorem}}} + +\def\@spynthm#1#2#3#4{\expandafter\@ifdefinable\csname #1\endcsname + {\@definecounter{#1}% + \expandafter\xdef\csname the#1\endcsname{\@thmcounter{#1}}% + \expandafter\xdef\csname #1name\endcsname{#2}% + \global\@namedef{#1}{\@spthm{#1}{\csname #1name\endcsname}{#3}{#4}}% + \global\@namedef{end#1}{\@endtheorem}}} + +\def\@spothm#1[#2]#3#4#5{% + \@ifundefined{c@#2}{\@latexerr{No theorem environment `#2' defined}\@eha}% + {\expandafter\@ifdefinable\csname #1\endcsname + {\global\@namedef{the#1}{\@nameuse{the#2}}% + \expandafter\xdef\csname #1name\endcsname{#3}% + \global\@namedef{#1}{\@spthm{#2}{\csname #1name\endcsname}{#4}{#5}}% + \global\@namedef{end#1}{\@endtheorem}}}} + +\def\@spthm#1#2#3#4{\topsep 7\p@ \@plus2\p@ \@minus4\p@ +\refstepcounter{#1}% +\@ifnextchar[{\@spythm{#1}{#2}{#3}{#4}}{\@spxthm{#1}{#2}{#3}{#4}}} + +\def\@spxthm#1#2#3#4{\@spbegintheorem{#2}{\csname the#1\endcsname}{#3}{#4}% + \ignorespaces} + +\def\@spythm#1#2#3#4[#5]{\@spopargbegintheorem{#2}{\csname + the#1\endcsname}{#5}{#3}{#4}\ignorespaces} + +\def\@spbegintheorem#1#2#3#4{\trivlist + \item[\hskip\labelsep{#3#1\ #2\@thmcounterend}]#4} + +\def\@spopargbegintheorem#1#2#3#4#5{\trivlist + 
   \item[\hskip\labelsep{#4#1\ #2}]{#4(#3)\@thmcounterend\ }#5}
+
+% definition of \spnewtheorem* without number
+
+\def\@sthm#1#2{\@Ynthm{#1}{#2}}
+
+\def\@Ynthm#1#2#3#4{\expandafter\@ifdefinable\csname #1\endcsname
+  {\global\@namedef{#1}{\@Thm{\csname #1name\endcsname}{#3}{#4}}%
+  \expandafter\xdef\csname #1name\endcsname{#2}%
+  \global\@namedef{end#1}{\@endtheorem}}}
+
+\def\@Thm#1#2#3{\topsep 7\p@ \@plus2\p@ \@minus4\p@
+\@ifnextchar[{\@Ythm{#1}{#2}{#3}}{\@Xthm{#1}{#2}{#3}}}
+
+\def\@Xthm#1#2#3{\@Begintheorem{#1}{#2}{#3}\ignorespaces}
+
+\def\@Ythm#1#2#3[#4]{\@Opargbegintheorem{#1}
+  {#4}{#2}{#3}\ignorespaces}
+
+\def\@Begintheorem#1#2#3{#3\trivlist
+  \item[\hskip\labelsep{#2#1\@thmcounterend}]}
+
+\def\@Opargbegintheorem#1#2#3#4{#4\trivlist
+  \item[\hskip\labelsep{#3#1}]{#3(#2)\@thmcounterend\ }}
+
+\if@envcntsect
+  \def\@thmcountersep{.}
+  \spnewtheorem{theorem}{Theorem}[section]{\bfseries}{\itshape}
+\else
+  \spnewtheorem{theorem}{Theorem}{\bfseries}{\itshape}
+  \if@envcntreset
+    \@addtoreset{theorem}{section}
+  \else
+    \@addtoreset{theorem}{chapter}
+  \fi
+\fi
+
+% definition of various theorem environments
+\spnewtheorem*{claim}{Claim}{\itshape}{\rmfamily}
+\spnewtheorem*{proof}{Proof}{\itshape}{\rmfamily}
+\if@envcntsame % all environments numbered like theorem
+  \def\spn@wtheorem#1#2#3#4{\@spothm{#1}[theorem]{#2}{#3}{#4}}
+\else % all environments with their own counter
+  \if@envcntsect % numbered within section
+    \def\spn@wtheorem#1#2#3#4{\@spxnthm{#1}{#2}[section]{#3}{#4}}
+  \else % not numbered within section
+    \if@envcntreset
+      \def\spn@wtheorem#1#2#3#4{\@spynthm{#1}{#2}{#3}{#4}
+        \@addtoreset{#1}{section}}
+    \else
+      \def\spn@wtheorem#1#2#3#4{\@spynthm{#1}{#2}{#3}{#4}
+        \@addtoreset{#1}{chapter}}%
+    \fi
+  \fi
+\fi
+\spn@wtheorem{case}{Case}{\itshape}{\rmfamily}
+\spn@wtheorem{conjecture}{Conjecture}{\itshape}{\rmfamily}
+\spn@wtheorem{corollary}{Corollary}{\bfseries}{\itshape}
+\spn@wtheorem{definition}{Definition}{\bfseries}{\itshape}
+\spn@wtheorem{example}{Example}{\itshape}{\rmfamily}
+\spn@wtheorem{exercise}{Exercise}{\itshape}{\rmfamily}
+\spn@wtheorem{lemma}{Lemma}{\bfseries}{\itshape}
+\spn@wtheorem{note}{Note}{\itshape}{\rmfamily}
+\spn@wtheorem{problem}{Problem}{\itshape}{\rmfamily}
+\spn@wtheorem{property}{Property}{\itshape}{\rmfamily}
+\spn@wtheorem{proposition}{Proposition}{\bfseries}{\itshape}
+\spn@wtheorem{question}{Question}{\itshape}{\rmfamily}
+\spn@wtheorem{solution}{Solution}{\itshape}{\rmfamily}
+\spn@wtheorem{remark}{Remark}{\itshape}{\rmfamily}
+
+\def\@takefromreset#1#2{%
+    \def\@tempa{#1}%
+    \let\@tempd\@elt
+    \def\@elt##1{%
+        \def\@tempb{##1}%
+        \ifx\@tempa\@tempb\else
+            \@addtoreset{##1}{#2}%
+        \fi}%
+    \expandafter\expandafter\let\expandafter\@tempc\csname cl@#2\endcsname
+    \expandafter\def\csname cl@#2\endcsname{}%
+    \@tempc
+    \let\@elt\@tempd}
+
+\def\theopargself{\def\@spopargbegintheorem##1##2##3##4##5{\trivlist
+      \item[\hskip\labelsep{##4##1\ ##2}]{##4##3\@thmcounterend\ }##5}
+    \def\@Opargbegintheorem##1##2##3##4{##4\trivlist
+      \item[\hskip\labelsep{##3##1}]{##3##2\@thmcounterend\ }}
+    }
+
+\renewenvironment{abstract}{%
+      \list{}{\advance\topsep by0.35cm\relax\small
+      \leftmargin=1cm
+      \labelwidth=\z@
+      \listparindent=\z@
+      \itemindent\listparindent
+      \rightmargin\leftmargin}\item[\hskip\labelsep
+      \bfseries\abstractname]}
+    {\endlist}
+\renewcommand{\abstractname}{Abstract.}
+\renewcommand{\contentsname}{Table of Contents}
+\renewcommand{\figurename}{Fig.}
+\renewcommand{\tablename}{Table}
+
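+% Usage sketch (hypothetical environment names): with the \spnewtheorem
+% machinery defined above, a paper could declare a section-numbered
+% environment and an unnumbered one in its preamble, e.g.
+%   \spnewtheorem{algorithm}{Algorithm}[section]{\bfseries}{\rmfamily}
+%   \spnewtheorem*{notation}{Notation}{\bfseries}{\rmfamily}
+% following the argument pattern documented above.
+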
+\newdimen\headlineindent % dimension for space between
+\headlineindent=1.166cm % number and text of headings.
+
+\def\ps@headings{\let\@mkboth\@gobbletwo
+   \let\@oddfoot\@empty\let\@evenfoot\@empty
+   \def\@evenhead{\normalfont\small\rlap{\thepage}\hspace{\headlineindent}%
+                  \leftmark\hfil}
+   \def\@oddhead{\normalfont\small\hfil\rightmark\hspace{\headlineindent}%
+                 \llap{\thepage}}
+   \def\chaptermark##1{}%
+   \def\sectionmark##1{}%
+   \def\subsectionmark##1{}}
+
+\def\ps@titlepage{\let\@mkboth\@gobbletwo
+   \let\@oddfoot\@empty\let\@evenfoot\@empty
+   \def\@evenhead{\normalfont\small\rlap{\thepage}\hspace{\headlineindent}%
+                  \hfil}
+   \def\@oddhead{\normalfont\small\hfil\hspace{\headlineindent}%
+                 \llap{\thepage}}
+   \def\chaptermark##1{}%
+   \def\sectionmark##1{}%
+   \def\subsectionmark##1{}}
+
+\if@runhead\ps@headings\else
+\ps@empty\fi
+
+\setlength\arraycolsep{1.4\p@}
+\setlength\tabcolsep{1.4\p@}
+
+\endinput
diff --git a/helm/papers/calculemus-2003/outline.txt b/helm/papers/calculemus-2003/outline.txt
new file mode 100644
index 000000000..d349353fa
--- /dev/null
+++ b/helm/papers/calculemus-2003/outline.txt
@@ -0,0 +1,53 @@
+1) Introduction
+  - web services
+  - monet
+  - several WSs for CASs and TPs
+  - proof assistants
+    - low automation
+    - application-centric, not client/server
+    ==>> proof assistants as WS clients
+  - omega ants
+2) Architecture
+  - figure 7.1 from my thesis
+  - actors (client, broker, tutor)
+  - roles and functionalities of each of them (brief)
+  - mapping onto the monet actors
+3) Details
+  3.1) encoding of the status and of the hints
+    - note: the architecture is generic, it does not depend on a particular
+      encoding of proofs
+    - interface towards the broker
+  3.2) client
+    - proof assistant
+  3.3) broker
+    - registries
+    - interface towards the clients (registration, subscription, state change,
+      deregistration)
+    - interface towards the tutors (registration, hint, deregistration)
+  3.4) tutor
+    - what a tutor can implement
+    - no assumption about the correctness of the hints
+    - interface towards the broker (start/stop musing)
+    - threads (?)
+4) Sample session
+  - screen shot
+5) Implemented tutors
+  - dumb ones
+  - (1-1) with the tactics of the proof assistant
+    - automatic generation
+  - searchPatternApply
+6) Conclusions and future work
+  - success story (gTopLevel)
+  - heavyweight tactics
+    ==>> distribution
+  - too wide a choice for beginners
+    ==>> hints
+  - useful for power users (distribution)
+  - useful for newbies (learning)
+  - future developments
+    - recursion?
+    - rating of the results
+    - negative answers (disabling tactics)
+    - blackboard?
+    - GUI
+  - description of monet
-- 
2.39.2