From: Hans Aberg
Subject: Re: Performance impact of "redundant" rules
Date: Tue, 14 Mar 2006 20:15:38 +0100
On 7 Mar 2006, at 18:42, Sylvain Schmitz wrote:
> Too many states means less chances of merging concurrent stacks. One could also point out that less inadequate states means less branching, but the number of inadequate states avoided when using LR(1) instead of LALR(1) is not very big, whereas the difference in automaton size is exponential. Thus LR(1) is much more practicable if you merge states.
My guess is that this exponential blow-up in the number of states refers to the difference between LR(0) and LR(1). There is a similar exponential blow-up in the NFA-to-DFA translation, and two common ways around it: the usual one is to ignore the issue, since typical regular expressions do not trigger the exponential case. Alternatively, one can perform the NFA-to-DFA translation at runtime, caching the DFA states as they are first reached, though I have never seen such an implementation.
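The runtime translation mentioned above can be sketched as follows. This is a minimal illustration, not a production lexer: the NFA is a hypothetical hand-built one for the language `(a|b)*abb`, and DFA states (sets of NFA states) are only materialized when a transition is first taken, then cached.

```python
# Lazy (on-the-fly) subset construction: each DFA state is a frozenset of NFA
# states, computed the first time it is reached and cached thereafter, so only
# the reachable part of the (potentially exponential) DFA is ever built.

# Hypothetical NFA for (a|b)*abb: (state, symbol) -> set of successor states.
nfa = {
    (0, 'a'): {0, 1}, (0, 'b'): {0},
    (1, 'b'): {2},
    (2, 'b'): {3},
}
start_states = {0}
accepting = {3}

def move(states, sym):
    """Union of NFA successors of `states` on input `sym`."""
    out = set()
    for s in states:
        out |= nfa.get((s, sym), set())
    return out

dfa_cache = {}  # (dfa_state, symbol) -> dfa_state

def step(dstate, sym):
    """Follow one DFA transition, building it lazily on first use."""
    key = (dstate, sym)
    if key not in dfa_cache:
        dfa_cache[key] = frozenset(move(dstate, sym))
    return dfa_cache[key]

def matches(word):
    d = frozenset(start_states)
    for sym in word:
        d = step(d, sym)
    return bool(d & accepting)
```

After a few calls to `matches`, `dfa_cache` holds exactly the transitions exercised by the inputs seen so far; unexercised parts of the DFA are never constructed.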
So, as for LR(1), I guess that for typical grammars there will be a blow-up of a few times the number of states. If exponential blow-ups do become a problem, one might try the same approach: translate the grammar into a non-deterministic automaton, with a runtime component that computes the deterministic push-down automaton states on demand, caching them.
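The same caching idea carries over to LR item-set construction. The toy sketch below (not how Bison works, and using LR(0) items rather than LR(1) items to keep it short) computes a successor state with `goto` only when a `(state, symbol)` pair is actually reached, memoizing the result; the grammar `S -> S a | a` is a hypothetical example.

```python
# Lazy LR(0) item-set construction: goto(state, sym) builds a successor state
# (closure of the shifted kernel items) only on first demand and caches it.
# An item is a tuple (lhs, rhs, dot); a state is a frozenset of items.

GRAMMAR = {                      # hypothetical grammar: S -> S a | a
    'S': (('S', 'a'), ('a',)),
}

def closure(items):
    """Add, for every item with the dot before a nonterminal, all of that
    nonterminal's productions with the dot at position 0."""
    items = set(items)
    changed = True
    while changed:
        changed = False
        for (lhs, rhs, dot) in list(items):
            if dot < len(rhs) and rhs[dot] in GRAMMAR:
                for prod in GRAMMAR[rhs[dot]]:
                    item = (rhs[dot], prod, 0)
                    if item not in items:
                        items.add(item)
                        changed = True
    return frozenset(items)

goto_cache = {}  # (state, symbol) -> successor state

def goto(state, sym):
    """Build the successor state lazily and memoize it."""
    key = (state, sym)
    if key not in goto_cache:
        kernel = {(l, r, d + 1) for (l, r, d) in state
                  if d < len(r) and r[d] == sym}
        goto_cache[key] = closure(kernel)
    return goto_cache[key]

# Augmented start state: S' -> . S
start = closure({("S'", ('S',), 0)})
```

Only the states along transitions actually taken end up in `goto_cache`, which is the point: the full automaton, exponential or not, never needs to be enumerated up front.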
As for Bison, I think it should at some point in the future get an LR(1) feature, so that one can at least experiment with it.
Hans Aberg