From: Eric Ludlam
Subject: Re: Why tree-sitter instead of Semantic? (was Re: CC Mode with font-lock-maximum-decoration 2)
Date: Sat, 20 Aug 2022 09:15:36 -0400
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Thunderbird/91.11.0

On 8/18/22 8:34 AM, Lynn Winebarger wrote:
On Tue, Aug 16, 2022 at 9:41 PM Eric Ludlam <ericludlam@gmail.com> wrote:
On 8/16/22 1:40 PM, Lynn Winebarger wrote:
On Tue, Aug 16, 2022 at 1:19 PM Stefan Monnier <monnier@iro.umontreal.ca> wrote:
I'm only saying there's a disconnect between Jostein's report and Po's
response.  It's probably a UI issue.  There's a checkbox in a dropdown
menu that says "Source Code Parsers (Semantic)".
FWIW, I've used (semantic-mode 1) to enable CEDET in Emacs's C source
files, and that was all that was needed to get TAB completion of struct
field names working.
I haven't used it for much more than that, admittedly.
It also works for me, but I have also mostly been looking at the Emacs
source with it, and Semantic knows how to use the TAGS file for
context-sensitive completion in C.  And something is working
gangbusters in Elisp, but unfortunately I can't really identify which
package is doing the work.

*  "${" and "{" could both open a block closed by "}"
Why do you think it's a problem?
If you want the lexer to tokenize the ${ as a symbol while still
recognizing the text in between as delimited, it seems like a problem.
I mean, I already deal with that in ordinary font-lock; I was hoping
the parser/lexer generation would address the issue independently of
syntax tables.
Lexers are built per-language from a set of analyzers.  Thus, you call
(define-lex ...) and list a bunch of analyzers, which are created with
`define-lex-analyzer' or one of the variants.

The analyzers mostly use regular expressions, and when possible they use
expressions that consult the syntax table, because those are quite fast.
If you restrict yourself to the built-in named lexer analyzers, like
'semantic-lex-whitespace', that is what you get, but you can use
`define-lex-analyzer' or `define-lex-regex-analyzer' and write any code
you want to do a match, push a token, and find the end point.  The C
lexer/parser does this a lot.

For a very simple case like matching ${:
(define-lex-simple-regex-analyzer my-dollar-curly
  "Match ${ and push a `dollar-curly' token."
  "\\$\\{" 'dollar-curly)

and then put this in front of the { } block analyzer when you build up
your lexer.
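
For a delimited construct where the end point has to be found by hand,
a sketch along these lines should work (`my-interp-string' and the
`interp-string' token class are made up for illustration):

(require 'semantic/lex)

;; Match "${", then locate the closing "}" ourselves and push one
;; token covering the whole span.  `semantic-lex-push-token' also
;; records the token's end point, so the lexer resumes after the "}".
(define-lex-regex-analyzer my-interp-string
  "Match ${...} and push a single `interp-string' token."
  "\\$\\{"
  (let ((start (match-beginning 0))
        (end (save-excursion
               (goto-char (match-end 0))
               ;; Naive search; real code would handle nesting and
               ;; an unterminated construct.
               (search-forward "}")
               (point))))
    (semantic-lex-push-token
     (semantic-lex-token 'interp-string start end))))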
Thanks for the details.  I'm not sure what you mean by "put this in
front of the ... block analyzer" though.  I just don't understand how
the different token types interact with each other and/or the "block"
(or other) constructs well enough to confidently use the built-in
types.
What I will take away here is that I can closely review the C
lexer/parser to see how someone who understands the interaction of
those types uses them effectively, before investing a lot of time
studying the construction of the built-in types in order to extend
them.  I'm not sure I would do that for the problem I'm currently
dealing with in any case.
Am I right that the "block" classification is used to allow Semantic
to localize the impact of unparseable text?  It sounds like the system
will still function without explicitly declaring block constructs, but
some useful features might be effectively disabled.
Building a lexer is done in two steps.  First, you build analyzers for specific matches, such as the example above.  Then, once you have a set of analyzers for the specific syntaxes, you assemble them into a lexer, like this:

(define-lex my-lexer
  "A lexer with a custom ${ analyzer."
  semantic-lex-ignore-whitespace
  ;; Custom stuff that conflicts with blocks
  my-dollar-curly
  ;; Do some blocks
  semantic-lex-paren-or-list
  semantic-lex-close-paren
  ;; Other stuff
  semantic-lex-number
  ;; End with this
  semantic-lex-default-action)
Hopefully this explains the basics of building out some analyzers and your lexer.

If you are building a lexer just to do some tokenizing, then this is about all you need; the documentation has more details.
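
To actually run it over a buffer, something like this sketch should do
(`my-tokenize-buffer' is a made-up helper; `semantic-lex' and the
buffer-local `semantic-lex-analyzer' variable come from semantic/lex.el):

(require 'semantic/lex)

(defun my-tokenize-buffer ()
  "Run `my-lexer' over the current buffer and return the token stream."
  ;; `semantic-lex' dispatches through this buffer-local variable.
  (setq-local semantic-lex-analyzer #'my-lexer)
  ;; A large DEPTH makes the paren/list analyzer descend into nested
  ;; blocks instead of returning each as one `semantic-list' token.
  (semantic-lex (point-min) (point-max) 99))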

If you want to build a parser that sits on top of the lexer, there is more to it.  I recommend using the Wisent parser generator, as it creates faster parsers.  In the Wisent .wy files, you define %token declarations using a bison-like syntax, and those in turn build analyzers that you include in your lexer.  The Java parser and lexer have a lot of cases, though the calc parser is smaller and easier to grok.
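
As a rough illustration of the %token syntax (these declarations are
made up for the example, not taken from the actual grammars), a .wy
fragment looks something like:

;; Each %token declaration names a lexical class and, for fixed
;; strings, the literal text to match.
%token <punctuation> DOLLAR_CURLY "${"
%token <punctuation> RBRACE       "}"
%token <number>      NUMBER

The analyzers generated from these then go into your `define-lex' form
just like the hand-written ones above.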

The purpose of 'block' constructs in the lexer is just to cut out large chunks of text that you don't have to write grammar rules for.  My goal was creating tags, and parsing the body of a function, for example, is not needed for that, so using the lexer to skip all of it speeds things up.  If you want to parse the ENTIRE file, just don't put blocks in your lexer, and only put in the open/close paren analyzers.
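
For instance, the difference is just which analyzers you list (both
lexer names are made up; the analyzers are built-ins from
semantic/lex.el):

;; Tagging lexer: at the default depth, `semantic-lex-paren-or-list'
;; returns a whole balanced block as a single `semantic-list' token,
;; so function bodies are skipped.
(define-lex my-tagging-lexer
  "Lexer that skips over block bodies."
  semantic-lex-ignore-whitespace
  semantic-lex-ignore-newlines
  semantic-lex-paren-or-list
  semantic-lex-close-paren
  semantic-lex-default-action)

;; Full lexer: every open and close paren becomes its own token, so
;; a parser can see the entire file.
(define-lex my-full-lexer
  "Lexer that emits every open/close paren."
  semantic-lex-ignore-whitespace
  semantic-lex-ignore-newlines
  semantic-lex-open-paren
  semantic-lex-close-paren
  semantic-lex-default-action)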

Hope this helps.
Eric




