Delift used to produce ill-typed substitutions. In turn, this
led to tactics that did not fail immediately, but only at Qed time.
Fixed by adding the appropriate well-typedness check to the delift function.
Note:
- pass -1 as the metavariable number to skip the check when you know what
you are doing (e.g. when delift is only used to restrict a term or to
replace a term with a convertible one). This value used to be 0, not -1.
- in some cases (algebraic universes and implicits used to stop chains of
metavariables) the check is relaxed
- to make the CerCo script pass (in very "reasonable" and "frequent" cases),
the algorithm needs to solve the following unification problem:
S (?[n]) == ?[(S n)]
Our algorithm used to delift S (?[n]) w.r.t. (S n), yielding x |- ? := x.
By chance, this was the right solution, since S (?[n]) becomes (S n) and
?[(S n)] also becomes (S n). However, the two solutions were both added
to the subst (first bug) and they could even have been different (second bug).
We fix this by checking that they are not different.
However, at the moment, the solution still occurs twice in the subst...
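The fix for the double-instantiation bug can be sketched as follows. This is a Python toy model, not the actual OCaml code; all names (`add_to_subst`, the `convertible` hook) are hypothetical:

```python
# Minimal sketch: a substitution maps metavariable numbers to terms.
# When unification produces a second solution for a metavariable that is
# already instantiated, the fix described above checks that the two
# solutions agree instead of silently recording a diverging one.

def add_to_subst(subst, metano, term, convertible=lambda a, b: a == b):
    """Record `metano := term`, refusing diverging double instantiations."""
    if metano in subst:
        if not convertible(subst[metano], term):
            raise ValueError(
                f"metavariable ?{metano} already instantiated with a "
                f"different term: {subst[metano]!r} vs {term!r}")
        # As noted above, the real subst may still hold the entry twice;
        # this sketch simply keeps the first copy.
        return subst
    subst[metano] = term
    return subst

subst = {}
add_to_subst(subst, 1, ("S", "n"))   # first solution for ?1
add_to_subst(subst, 1, ("S", "n"))   # same solution again: accepted
```

A real implementation would use convertibility rather than syntactic equality for the agreement check, as the log entry implies.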
Ferruccio Guidi [Wed, 1 Jun 2011 11:55:11 +0000 (11:55 +0000)]
subst.ma: some additions
convertibility.ma: now "conv" refers to the correct predicate
CC2FO_K.ma: CC to Fomega: first interpretation: substitution lemma
Serious bug fixed:
- in order to look for coercions to sort and funclass, we used to pass
types containing implicits together with a flag to avoid unification
- by totally avoiding unification, it happened
1) that we diverged:
t : A :> B ==> inject t : Sigma x.A :> B ==> eject (inject t) : A :> B
2) that, in some cases, we called unification on types that contained
implicits
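The divergence in case 1) is a cycle in the coercion graph: `inject` and `eject` compose back to the starting type, so a naive search never terminates. A minimal sketch of cutting such cycles with a visited set (hypothetical names; not the actual refiner code):

```python
# Toy coercion graph: each coercion maps a source type to a target type.
# Composing inject : A -> Sigma x.A with eject : Sigma x.A -> A returns
# to the starting type, so an unguarded search loops forever; a visited
# set on source types cuts the loop.

def find_coercion_path(src, tgt, coercions, visited=None):
    """Depth-first search over the coercion graph with cycle detection."""
    if visited is None:
        visited = set()
    if src == tgt:
        return []
    if src in visited:
        return None          # cycle: A -> Sigma x.A -> A -> ...
    visited.add(src)
    for name, (s, t) in coercions.items():
        if s == src:
            rest = find_coercion_path(t, tgt, coercions, visited)
            if rest is not None:
                return [name] + rest
    return None

coercions = {"inject": ("A", "Sigma_A"), "eject": ("Sigma_A", "A")}
# Without the visited set this call would recurse forever.
assert find_coercion_path("A", "B", coercions) is None
```

The actual fix in the log is different (re-enabling unification rather than guarding the search); the sketch only illustrates why the unguarded chain diverges.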
Wilmer Ricciotti [Thu, 12 May 2011 16:03:08 +0000 (16:03 +0000)]
Record projections:
no need to synthesize an ast term from a CIC term: just use NotationPt.NCic;
this also fixes a bug that used to prevent the generation of projections when
the types of some fields were associated with a notation (example: natural
numbers expressed using \naturals).
Wilmer Ricciotti [Thu, 12 May 2011 15:59:10 +0000 (15:59 +0000)]
Record projections:
no need to synthesize an ast term from a CIC term: just use NotationPt.NCic;
this also fixes a bug that used to prevent the generation of projections when
the types of some fields were associated with a notation (example: natural
numbers expressed using \naturals).
Patch to avoid equations of the form ? -> T (oriented this way), which
can always be applied and later yield ill-typed terms.
Moreover, when T is a Leaf and the goal contains a Leaf, the equation
above is applied, breaking one of the "assert (subst=[])" checks in the
code (since the Leaf is matched by ?).
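The patch above can be sketched as a filter over oriented equations. The term encoding is hypothetical (a bare metavariable is the tuple `("?", n)`); whether the real patch drops or re-orients such equations is not stated in the log, so this sketch simply drops them:

```python
# Sketch: rewrite equations are (lhs, rhs) pairs.  An equation oriented
# as `? -> T` has a bare metavariable on the left, so it matches ANY
# term (including every Leaf) and can later produce ill-typed terms;
# the filter below removes that orientation.

def is_meta(term):
    """A bare metavariable in this toy encoding: ("?", n)."""
    return isinstance(term, tuple) and len(term) > 0 and term[0] == "?"

def filter_equations(equations):
    """Drop every equation whose left-hand side is a bare metavariable."""
    return [(lhs, rhs) for lhs, rhs in equations if not is_meta(lhs)]

eqs = [(("?", 1), ("S", "n")),   # ? -> T : dangerous, dropped
       (("S", "n"), ("?", 2))]   # T -> ? : kept
print(filter_equations(eqs))
```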
Wilmer Ricciotti [Wed, 30 Mar 2011 11:52:27 +0000 (11:52 +0000)]
Keeping track of locations of disambiguated ids and symbols.
Matitac produces disambiguated files, with hyperlinks for ids and § marks
for symbols.
Matita is able to reparse ids with hyperlinks.
1) Second half of the bug fixing for the "lexical keywords lost" bug.
I expected that the lexer would also need re-building after the Notation
command, and indeed I have found an example of this in CerCo.
2) Fixed the printing of the compilation result and of the times required.
Ferruccio Guidi [Tue, 22 Mar 2011 18:53:26 +0000 (18:53 +0000)]
- lambda_notation.ma: more notation and bug fixes
- terms.ma: contains the data structure of terms and related functions (not involving substitution).
- ext_lambda.ma: split off from the previous ext.ma, containing the lambda-related objects
- subst.ma, types.ma: we removed notation from here
- types.ma: we added special cases of the weakening and thinning lemmas as axioms
Bug fixed and code refactoring: now both matitac and matita include files
correctly by re-generating ~include_paths in the same way and every time a
file is opened (either by matitaScript or by assert_ng itself).
Bug description:
- matitac (more precisely, MatitaEngine.assert_ng, hence the "include"
command) parses files differently from Matita. In particular, it works on
a camlp5 "Grammar.parsable" lexer, which is a lexer that remembers
look-ahead tokens.
- an "include" file can change the list of keywords for the lexer
(e.g. when defining the "if-then-else" notation). Hence, after an include,
the lexer to be used must change and thus the corresponding
"Grammar.parsable" should change too. This was not the case since the
"Grammar.parsable" was not regenerated to avoid loosing the look-ahead
tokens
Bug avoidance:
- we assume that the "include" command is properly terminated. Hence no
look-ahead is performed. After each "include" command we regenerate
the lexer to avoid the bug.
Note:
- why don't we need to do this just after a "notation" command?
Apparently, this is not required. However, I do not understand why.
Is this to be better understood in the future?
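The bug avoidance above can be sketched with a toy lexer whose keyword table is snapshotted at construction time, much like a "Grammar.parsable" freezes its state. All names here are hypothetical; the point is only that executing an "include" must rebuild the tokenizer:

```python
# Toy model: the lexer snapshots the keyword set when built, so mutating
# the set afterwards has no effect on an existing lexer -- the analogue
# of the stale camlp5 "Grammar.parsable".  After an `include` that adds
# keywords (e.g. an if-then-else notation), the lexer must be rebuilt.
import re

def make_lexer(keywords):
    kws = frozenset(keywords)            # snapshot, like the frozen lexer
    def lex(text):
        for word in re.findall(r"\w+|\S", text):
            yield ("KEYWORD" if word in kws else "IDENT", word)
    return lex

keywords = {"include", "notation"}
lex = make_lexer(keywords)

def execute(statement):
    global lex
    if statement.startswith("include"):
        # The included file defines notation that introduces new keywords.
        keywords.update({"if", "then", "else"})
        lex = make_lexer(keywords)       # regenerate: no stale look-ahead

execute("include basics")
assert [t for t, w in lex("if x then y")] == \
    ["KEYWORD", "IDENT", "KEYWORD", "IDENT"]
```

Without the `make_lexer` call inside `execute`, the old snapshot would still lex `if` as a plain identifier, which is the "lexical keywords lost" symptom described above.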