bison-patches

From: Akim Demaille
Subject: Re: several messages
Date: Mon, 7 Sep 2009 20:14:23 +0200

Hi Joel!

On Sep 3, 2009, at 07:35, Joel E. Denny wrote:

Hi Akim.

On Wed, 2 Sep 2009, Akim Demaille wrote:

I played with canonical-lr, but it is way too expensive for me, both in terms of size and compilation time (maybe in a release mode, but...).

By compilation, you mean gcc?  Or is bison slow?

I meant bison.

But toying with lr.default-reductions (still in lalr) gives very good results.

I have 382 LR(0) states, and 12941 in canonical LR(1).

That's quite a jump. I usually see only one more digit. There must be many similar contexts within your grammar. What does IELR(1) give you?
How long does bison take to run for IELR(1) versus LALR(1)?

My grammar has:
- 185 rules
- 104 terminals
- 58 nonterminals

Here are the figures:

bison=./_build/debug/bison/tests/bison
src=$HOME/src/kernel
build=$src/_build/debug
includes="-I$build/src -I$src/src -I$src/include -I$src/sdk-remote/include -I$build/sdk-remote/libport/include -I$src/sdk-remote/libport/include -I$build/include -I$src/kernel -I/opt/local/include"
for lr in lalr ielr canonical-lr
do
  for red in all consistent accepting
  do
    echo
    echo $lr-$red
    TIMEFMT="Bison %E real  %U user  %S system"
    time $bison --trace=time -Dlr.type=$lr -Dlr.default-reductions=$red src/parser/ugrammar.y -o /tmp/ugrammar.cc
    TIMEFMT="G++   %E real  %U user  %S system"
    time g++ -DHAVE_CONFIG_H $=includes -c /tmp/ugrammar.cc -o /tmp/ugrammar.o
    $bison -Dlr.type=$lr -Dlr.default-reductions=$red src/parser/ugrammar.y -o /tmp/ugrammar.cc -rall
    sed -n '/^state [0-9]*$/h;${x;p;};' /tmp/ugrammar.output
    gls -sh /tmp/ugrammar.*
  done
done |& tee log.txt

Attachment: log.txt
Description: Text document
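
For reference, the same selection can also be made in the grammar file itself rather than on the command line. A sketch, using the variable names as they are spelled in this thread (released Bison versions may spell some of them differently):

/* In the declarations section of ugrammar.y, the -D flags above
   would become %define directives, for instance:  */
%define lr.type ielr
%define lr.default-reductions consistent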



It would be nice to extend the *.output file with various metrics about the grammar.

Here are the sizes of the parser files.

address@hidden ~/src/gnet/kernel $ ls -l _build/*/ugrammar.cc
-rw-r--r--  1 akim  akim   321287 Sep  2 12:10 _build/accepting/ugrammar.cc
-rw-r--r--  1 akim  akim   170357 Sep  2 12:13 _build/all/ugrammar.cc
-rw-r--r--  1 akim  akim  8913314 Sep  2 11:52 _build/canonical-lr/ugrammar.cc
-rw-r--r--  1 akim  akim   263165 Sep  2 12:06 _build/consistent/ugrammar.cc

It's interesting that each value for lr.default-reductions has a
significant impact on size, but at least the difference is not orders of
magnitude, as it is for canonical LR.

Yes, agreed.

The syntax error on "1 1" behaves as expected with both accepting and
consistent (i.e., mute since there are too many possibilities ;).

If you could unmute it (by adding a few more possibilities in the
skeleton), it would be interesting to see if canonical LR gives the same
expected tokens list.

One way out of the fixed arity would be to use "note: " lines, as Alex suggested (and I'm slowly changing my mind in this regard).


So I will proceed with default-reductions=all on space-starving targets
(on which I use simple error messages anyway, to get rid of the tables),
and "accepting" otherwise.

Why not "consistent"? For error messages, I don't know of any reason to
expect either to typically be better than the other.  But "consistent"
produces the traditional Yacc behavior (actually defined in POSIX) of not requesting a token from yylex until necessary. This can affect semantic
actions.

I don't see value in this feature in my case. I guess it is valuable in languages like C, where the token kind of an identifier depends on the context.
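
The classic instance is indeed the C typedef problem: the scanner must consult a symbol table that the parser's own actions fill in, so the result depends on whether the parser has already reduced the declaration before it asks for the next token. Below is a minimal sketch of that situation; the grammar, the helper names, and the little driver are mine and purely illustrative, not taken from any real parser. With the Yacc/POSIX timing (default reductions at least in consistent states), the typedef action runs before the next identifier is scanned; with lr.default-reductions=accepting, the parser fetches that identifier as lookahead first and the scanner misclassifies it.

%{
/* Sketch of a lexical tie-in: the token kind returned for an
   identifier depends on a declaration already reduced by the parser,
   so it also depends on *when* the parser calls yylex.  */
#include <ctype.h>
#include <stdio.h>
#include <string.h>

static int yylex (void);
static void yyerror (const char *);

/* A one-entry "symbol table": the last name declared as a type.  */
static char type_name[32];
%}

%union { char name[32]; }
%token TYPEDEF "typedef"
%token <name> IDENTIFIER TYPE_NAME

%%

decls: /* empty */ | decls decl;

decl:
  "typedef" IDENTIFIER ';'  { strcpy (type_name, $2); }
| TYPE_NAME IDENTIFIER ';'  { printf ("variable %s of type %s\n", $2, $1); }
;

%%

/* Scan the fixed input "typedef foo; foo x;".  */
static const char *input = "typedef foo; foo x;";

static int
yylex (void)
{
  while (*input == ' ')
    ++input;
  if (isalpha ((unsigned char) *input))
    {
      char word[32];
      int len = 0;
      while (isalpha ((unsigned char) *input))
        word[len++] = *input++;
      word[len] = 0;
      if (!strcmp (word, "typedef"))
        return TYPEDEF;
      strcpy (yylval.name, word);
      /* The tie-in: classify the identifier from the symbol table.  */
      return strcmp (word, type_name) ? IDENTIFIER : TYPE_NAME;
    }
  return *input ? *input++ : 0;
}

static void
yyerror (const char *msg)
{
  fprintf (stderr, "%s\n", msg);
}

int
main (void)
{
  return yyparse ();
}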

The only reason I made canonical LR choose "accepting" by default is
that I think of it as the canonical form.

OK, I'll use consistent, thanks!  It is significantly smaller.


The documentation for error-verbose should probably
promote the use of accepting/consistent.

They can sometimes worsen the error messages.  That is, in some cases,
lookahead merging might produce worse effects on the expected list in the state containing the default reduction than is produced by performing the
default reduction.  For example, consider the input "aaz" for this
grammar:

%token 'z';
S: 'a' A 'a'
 | 'b' A B
 ;
A: 'a' ;
B: 'a' | 'b' | 'c' | 'd' ;

Wow.  You do have a collection of interesting grammars at hand :)

With lr.type=canonical-lr or lr.type=lalr:

syntax error, unexpected 'z', expecting 'a'

But with lr.type=lalr and lr.default-reductions=accepting:

syntax error, unexpected 'z', expecting 'a' or 'b' or 'c' or 'd'

If we add:

%left 'a';
A: 'a' 'a';

Then lr.default-reductions=consistent has the same problem.
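
For readers who want to reproduce these diagnostics, one way to wrap the fragment above into a self-contained grammar is sketched below; the %error-verbose directive and the tiny driver feeding the input "aaz" are my additions. Regenerating the parser with different lr.type and lr.default-reductions settings (for instance via -D options as in the timing script earlier in this message) and running it should show the two messages quoted above.

%{
#include <stdio.h>
static int yylex (void);
static void yyerror (const char *);
%}
%error-verbose
%token 'z'
%%
S: 'a' A 'a'
 | 'b' A B
 ;
A: 'a' ;
B: 'a' | 'b' | 'c' | 'd' ;
%%
/* Feed the fixed input "aaz", one character per call.  */
static const char *input = "aaz";

static int
yylex (void)
{
  return *input ? *input++ : 0;
}

static void
yyerror (const char *msg)
{
  fprintf (stderr, "%s\n", msg);
}

int
main (void)
{
  return yyparse ();
}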

My suspicion would have been that disabling default reductions almost
always does more good than harm, but I think we need more exploration.

So if we can't trust static tables, should we actually try each possible token one after the other, and see if one can be shifted at some point? Of course, without touching the stack, that can be quite a pain to write :( But it should not be very different from our error-recovery code.

Do I also understand that we have neither a superset nor a subset, just some fuzzy set?


Maybe that would also explain my incorrect
messages in the nearby thread about semantic error messages.

I skimmed that, and I would guess that you need canonical LR. I haven't
explored your grammar yet though.

I don't think you have the grammar I was referring to, or maybe an old
version of it.

For some reason, I thought you meant the calc++ example grammar,

Maybe it does feature the same phenomenon; I didn't check.

so that's
what I looked at.

I can send ugrammar.y privately, if you wish.
