Re: Tokenizing

From: Vladimir Kazanov
Subject: Re: Tokenizing
Date: Sat, 20 Sep 2014 19:40:53 +0300

> Tokenizing the whole buffer after any change is easily fast enough (on
> modern hardware), even on a 7000 line buffer. Semantic parsing gets a
> lot slower.

This is what I do right now in my prototype of a smarter Python mode.
Tokenizing itself is usually fast enough, but parsing is more
complicated: rebuilding the parse tree after every change can take
noticeable time. An incremental approach is a natural next step here.
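To illustrate the point, here is a minimal sketch using Python's standard
tokenize module. Full-buffer tokenization is one cheap pass; the
retokenize_from helper is a hypothetical illustration (not code from my
prototype) of the incremental idea: after an edit, tokens that end before
the first changed line are still valid and can be reused, and only the
rest of the buffer needs re-tokenizing.

```python
import io
import tokenize

def tokenize_buffer(text):
    """Tokenize an entire buffer in one pass.

    Even on large buffers this finishes quickly, which is why full
    re-tokenization after every change is viable.
    """
    return list(tokenize.generate_tokens(io.StringIO(text).readline))

def retokenize_from(old_tokens, new_text, first_changed_line):
    """Hypothetical incremental sketch: reuse tokens that end before the
    first changed line, re-tokenize the rest of the buffer.

    Assumes the edit did not shift any line above first_changed_line,
    so the reused token positions remain valid.
    """
    reusable = [t for t in old_tokens if t.end[0] < first_changed_line]
    fresh = [t for t in tokenize_buffer(new_text)
             if t.end[0] >= first_changed_line]
    return reusable + fresh
```

The same reuse idea is what makes incremental *parsing* attractive: most
of the tree is unaffected by a local edit, so only the subtrees touching
the changed region need rebuilding.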

Yours sincerely,

Vladimir Kazanov

