Patch to avoid equations of the form ? -> T (oriented this way): such
equations can always be applied and may later yield ill-typed terms.
Moreover, when T is a Leaf and the goal contains a Leaf, applying the
equation above breaks one of the "assert (subst = [])" checks in the code
(since the Leaf is matched by ?).
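A minimal OCaml sketch of the orientation guard (illustrative only: the
type and function names are hypothetical, not Matita's actual code):

  (* reject equations oriented with a bare metavariable on the left:
     ? -> T matches any term, so the rewrite would always apply and
     could later produce ill-typed terms *)
  type term = Meta of int | Leaf of int | Node of term list

  let orientable lhs =
    match lhs with
    | Meta _ -> false  (* refuse the orientation ? -> T *)
    | _ -> true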
Wilmer Ricciotti [Wed, 30 Mar 2011 11:52:27 +0000 (11:52 +0000)]
Keeping track of locations of disambiguated ids and symbols.
Matitac produces disambiguated files, with hyperlinks for ids and § marks
for symbols.
Matita is able to reparse ids with hyperlinks.
1) Second half of the fix for the "lexical keywords lost" bug.
I expected that the lexer would also need rebuilding after the Notation
command, and indeed I found an example of this in CerCo.
2) Fixed the printing of compilation results and of the times required.
Ferruccio Guidi [Tue, 22 Mar 2011 18:53:26 +0000 (18:53 +0000)]
- lambda_notation.ma: more notation and bug fixes
- terms.ma: contains the data structure of terms and related functions (not involving substitution).
- ext_lambda.ma: split off from the former ext.ma; contains the lambda-related objects
- subst.ma, types.ma: we removed the notation from these files
- types.ma: we added special cases of the weakening and thinning lemmas as axioms
Bug fixed and code refactored: now both matitac and matita include files
correctly by regenerating ~include_paths in the same way every time a
file is opened (either by matitaScript or by assert_ng itself).
Bug description:
- matitac (more precisely MatitaEngine.assert_ng, hence the "include"
command) parses files differently from Matita. In particular, it works on
a camlp5 "Grammar.parsable", i.e. a lexer that remembers
look-ahead tokens.
- an "include" file can change the list of keywords for the lexer
(e.g. when defining the "if-then-else" notation). Hence, after an include,
the lexer to be used must change, and thus the corresponding
"Grammar.parsable" should change too. This was not the case, since the
"Grammar.parsable" was not regenerated, to avoid losing the look-ahead
tokens.
Bug avoidance:
- we assume that the "include" command is properly terminated, hence no
look-ahead is performed. After each "include" command we regenerate
the lexer to avoid the bug (see the sketch after the note below).
Note:
- why don't we need to do this just after a "notation" command?
Apparently it is not required, but I do not understand why. This is to
be better understood in the future.
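A self-contained OCaml toy model of the stale look-ahead problem
(illustrative only, not Matita's actual code; camlp5's real token
buffering is more involved):

  type token = Kwd of string | Ident of string

  (* classify words against the keyword set in force right now *)
  let tokenize keywords words =
    List.map (fun w -> if List.mem w keywords then Kwd w else Ident w) words

  let () =
    let keywords = ref ["include"] in
    (* the buffered stream is built once, like a Grammar.parsable that
       has already consumed its look-ahead tokens *)
    let stale = tokenize !keywords ["include"; "foo.ma"; "if"; "x"] in
    keywords := "if" :: !keywords;          (* the include adds "if" *)
    assert (List.nth stale 2 = Ident "if"); (* stale: still an identifier *)
    (* the fix: regenerate the token stream after each include *)
    let fresh = tokenize !keywords ["if"; "x"] in
    assert (List.hd fresh = Kwd "if")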
Ferruccio Guidi [Sun, 27 Feb 2011 18:27:15 +0000 (18:27 +0000)]
- rc_sat.ma: we changed the notation for extensional equality; we now
use \cong, as in geometry
- rc_eval.ma (second part of the former rc.ma): we completed the evaluation
by adding a stack, and we proved weakening and thinning for it
Ferruccio Guidi [Sun, 27 Feb 2011 15:31:29 +0000 (15:31 +0000)]
- notation is now in a separate file
- sn.ma: we updated the axiomatization of SN while correcting the saturation
conditions
- rc_sat.ma: we proved that depRC is a candidate
Ferruccio Guidi [Sat, 26 Feb 2011 20:28:06 +0000 (20:28 +0000)]
- new file ext.ma with the objects needed so far for the normalization;
these should eventually be removed or moved into other files
- sn.ma: we parametrized the saturation conditions
- rc_sat.ma (first part of former rc.ma): we introduced extensional
equality on candidates
Ferruccio Guidi [Mon, 21 Feb 2011 19:05:11 +0000 (19:05 +0000)]
We started to set up the strong normalization proof.
We plan to use saturated subsets of strongly normalizing terms
(see for instance D. Cescutti 2001, but a better citation is required)
rather than reducibility candidates.
The benefit is that reduction is not needed to define such subsets.
Moreover, this approach has never been tried on a system with dependent types.
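For reference, the standard Tait-style saturation conditions in LaTeX
(a sketch only; the parametrized conditions actually used in sn.ma may
differ):

  % a set X of terms is saturated when:
  % (S1) it contains only strongly normalizing terms,
  % (S2) it contains every variable applied to SN arguments,
  % (S3) it is closed under beta-expansion with an SN argument.
  \begin{align*}
  \text{(S1)}\;& X \subseteq \mathit{SN} \\
  \text{(S2)}\;& x\,t_1 \cdots t_n \in X
      && \text{for every variable } x \text{ and } t_1,\dots,t_n \in \mathit{SN} \\
  \text{(S3)}\;& t[u/x]\,\vec{v} \in X \wedge u \in \mathit{SN}
      \implies (\lambda x.\,t)\,u\,\vec{v} \in X
  \end{align*}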
Wilmer Ricciotti [Wed, 19 Jan 2011 11:29:14 +0000 (11:29 +0000)]
Added some typing info to elimination principles, allowing the refiner
to succeed in more cases, also speeding up the generation process (no
more slow record definitions).
Wilmer Ricciotti [Wed, 19 Jan 2011 11:06:23 +0000 (11:06 +0000)]
Added some typing info to elimination principles, allowing the refiner
to succeed in more cases, also speeding up the generation process (no
more slow record definitions).
Bug fixed: the newScript method must be called only after registering the
GUI. I moved the invocation of self#newScript from the initializer to the
instance () function to solve the issue (an ugly GTK error when Matita
starts); a sketch of the fix follows.
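A minimal OCaml sketch of the ordering fix (hypothetical names; Matita's
real GUI code differs): the call is deferred from object construction to
a lazy instance () accessor, which runs only after registration.

  let gui_registered = ref false
  let register_gui () = gui_registered := true

  class script_manager = object
    (* the buggy version invoked self#newScript from an initializer,
       i.e. before register_gui had run *)
    method new_script = assert !gui_registered
  end

  (* fix: create the script only inside the lazy instance accessor *)
  let instance =
    let it = lazy (let m = new script_manager in m#new_script; m) in
    fun () -> Lazy.force it

  let () =
    register_gui ();      (* the GUI is registered first ... *)
    ignore (instance ())  (* ... so new_script now succeeds *)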
1) sequent2pres merged into content2pres
2) more functions to render sequents/context/substs to content and then to
presentation
3) new virtual class NCic.status with methods implementing pretty-printing
functions (same interface that used to be given in NCicPp); objects of this
class are now used REALLY ALL OVER the matita code (well, almost...)
4) NCicPp implements a concrete version of NCic.status
(low level pretty-printer)
5) ApplyTransformation implements a concrete version of NCic.status
(high level pretty-printer that uses notation). It also exports the boolean
reference to deactivate the high level pretty printer in favour of the
low level one
6) some code simplifications here and there (in particular for tactics)
7) the return type of BoxPp changed to yield information about hyperlinks;
this information is not yet used by the interface, nor is it computed
yet (not that easy to do...)
8) other minor stuff here and there
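A minimal, self-contained OCaml sketch of the pattern in items 3-5, with
invented simplified types (the real NCic.status interface is far richer):

  type term = Sort | Appl of term * term

  class virtual status = object
    method virtual ppterm : term -> string
  end

  (* low-level printer: raw syntax, no notation (cf. NCicPp) *)
  class low_level = object
    inherit status
    method ppterm t =
      let rec pp = function
        | Sort -> "Type"
        | Appl (f, a) -> "(" ^ pp f ^ " " ^ pp a ^ ")"
      in
      pp t
  end

  (* high-level printer with the exported boolean reference used to
     fall back to the low-level one (cf. ApplyTransformation) *)
  let use_notation = ref true

  class high_level = object
    inherit status
    method ppterm t =
      if !use_notation then "<term rendered with notation>"  (* placeholder *)
      else (new low_level)#ppterm t
  end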
Ferruccio Guidi [Sun, 2 Jan 2011 15:47:01 +0000 (15:47 +0000)]
- ld-html-root: ported to the permanent lambda-delta URL
- ld.dtd: we can specify a language encoding for the metalinguistic
annotation; moreover, we export the metalinguistic classification
- the other files are modified to support this feature
1. reset of statuses simplified
2. NCicEnvironment.invalidate_all (which used to hide a bug in invalidation)
removed. The semantics of the MTI is now shakier, but this needs to be
fixed in other ways.
3. {lexicon_,grafite_,...}status renamed to status