Re: lexbind ready for merge


From: Stefan Monnier
Subject: Re: lexbind ready for merge
Date: Wed, 30 Mar 2011 10:26:10 -0400
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/24.0.50 (gnu/linux)

>> be like.  And I don't like either of them: apply-partially is just a way
>> to make it easy to build closures by hand without resorting to
[...]
> apply-partially at least involves less typing than lexical-let, which (as

Yes, I see we agree.

> As for funcall-partially: I never really liked how apply-partially differs
> from apply.  The latter treats its last argument specially, while the former
> does not.  If I had my way, I'd rename the current apply-partially to
> funcall-partially (or partial-funcall?) and create a new apply-partially
> that unpacks its last argument.  But it probably can't be changed now...

I see your point.  I chose the name `apply-partially' because in the
functional programming community this is generally called a "partial
application", but indeed it clashes somewhat with the use of `apply' in
Lisp which doesn't just mean "function application".
I think it's too late to fix it, tho.
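
For concreteness, the asymmetry looks like this (plain illustration of
the status quo):

(apply #'+ 1 2 '(3 4))                   ; => 10, last arg is spread
(funcall #'+ 1 2 3 4)                    ; => 10, no spreading
(funcall (apply-partially #'+ 1 2) 3 4)  ; => 10

So `apply-partially' follows `funcall' rather than `apply': there is no
partially-applying counterpart that spreads a trailing list.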

>>> - It might be a good idea to remove the "Once Emacs 19 becomes standard..."
>>> comment from cl.el
>> Feel free to do that on the trunk, I don't think it's really related
>> to lexbind.
> If I could, I would. :-)

Why don't you request write access via Savannah?  That would also make
it easier for you to maintain js.el.

>> They can *almost* be turned into `let' and `let*', except that
>> (lexical-let ((buffer-file-name 3)) ...) will bind buffer-file-name
>> lexically whereas `let' will always bind it dynamically.  We could
>> either ignore those issues or try to handle them, but I'd rather just
>> mark lexical-let obsolete.
> I'd prefer to ignore the issues for now and transform lexical-let to let
> when lexical-binding is on, generating a compiler error if we're trying to
> lexically bind a special variable.  I don't think many people try to do
> that.  A macro that uses a lexical binding in its generated code still needs
> to use lexical-let in order for its generated form to work properly in
> either environment, and even outside macros, making lexical-let cheap in the
> lexbound case gives us a way to create backward-compatible code that
> automatically becomes more efficient in Emacs 24.

I'd tend to agree, but see below.
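
Just so we're talking about the same transform, I imagine something
along these lines (a rough sketch with a made-up name, not code from
the branch):

;; Hypothetical: degrade to a plain `let' when the expansion site is
;; lexbound, refusing to lexically bind special variables.
(defmacro compat-lexical-let (bindings &rest body)
  (if (not lexical-binding)             ; but see the caveat just below
      `(lexical-let ,bindings ,@body)   ; old cl.el behavior
    (dolist (b bindings)
      (let ((var (if (consp b) (car b) b)))
        (when (special-variable-p var)
          (error "Cannot lexically bind special variable `%s'" var))))
    `(let ,bindings ,@body)))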

>> (Of course, there's also the difficulty for the macro to reliably
>> determine whether the expansion will be run in lexical-binding or
>> dynamic-binding mode).
> Wouldn't inspecting the value of lexical-binding work well enough?

Sadly, that is not reliable: it only indicates whether the code found in
the current buffer uses lexical-binding, but the macro-call might be in
a function defined in some other file/buffer.
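
Concretely (hypothetical macro, just to show the failure mode):

(defmacro guess-scoping ()
  (if lexical-binding ''lexical ''dynamic))

If a call to such a macro sits in an uncompiled defun that lives in
a dynbound file, the macro only gets expanded the first time that
defun is called, at which point `lexical-binding' reflects whichever
buffer happens to be current, not the file the defun came from.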

>>> - lexical-binding only applies to code evaluated by `eval-buffer' and
>>> `eval-region'?! So I can't make code evaluated by M-: lexbound?
>> ?!?  AFAIK, M-: uses lexical or dynamic scoping according to the value
>> of lexical-binding in the current buffer.
> It does, but the documentation string still gives me the impression that it
> wouldn't be.

What part of its docstring gives you this impression?

> Why isn't lexical-binding respected for all evaluation?

I tried to patch all places where it needs to be used.  If I missed
some, please point them out.

>> Currently, I think the best way to do that is to add the feature to the
>> byte-compiler.  The most promising avenue for it might be to use code
>> of the form ((closure () <formalargs> <lexbindcode>) <actualargs>) and
>> compile efficiently (I think currently such code will either result in
>> a complaint about a malformed function, or else will leave the function
>> uncompiled).

> Ah, I see what you mean.  Would a with-lexical-scope form suffice?

> ;; with-lexical-scope is itself compiled with lexical-binding t
> (defmacro* with-lexical-scope (&body body)
>   (let* ((vars (remove-if (lambda (var)
>                             (or (special-variable-p var)
>                                 (not (boundp var))))
>                           (find-free-vars `(progn ,@body))))
>          (closure `(closure (t) ; no environment
>                     ,vars ,@body)))
>     `(funcall ,closure ,@vars)))

I think I'd rather just do:

(defmacro with-lexical-scope (&rest body)
  `((closure (t) () ,@body)))

That avoids several problems:
- no need to implement find-free-vars (tho it's already in cconv.el).
- no need to macroexpand-all and traverse `body' to find its free vars.
- `boundp' won't DTRT in the byte-compiler (you'd want to consult
  byte-compile-bound-variables instead).
- the semantics of capturing dynbound variables as if they were lexical
  vars sounds nasty.
- you copy the value of the free vars, which will only do the right
  thing if those vars aren't mutated.
  
Of course, to make the above work, you still need to teach the
byte-compiler how to handle such code, which shouldn't be too hard.
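
Just to spell out the expansion, a use like

(with-lexical-scope
  (let ((counter 0))
    (lambda () (setq counter (1+ counter)))))

would expand to

((closure (t) ()
   (let ((counter 0))
     (lambda () (setq counter (1+ counter))))))

i.e. an immediately applied closure with the trivial environment (t),
which is exactly the kind of form the compiler would need to learn
to handle.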

>> IIRC, the reason why defun doesn't work for it is fundamentally linked
>> to some silly technicality, but I justify it for myself by the fact that
>> all the "defun within a non-empty context" I've seen were bogus, so I'm
>> not strongly motivated to handle it right.
> Even if it's not particularly common, being consistent with Common Lisp and
> having fewer special cases are good things.  Some people use constructs like
> this to create module-private variables (which is a bad idea, but that
> doesn't stop people doing it.)

The right way to fix it is to make defun (and defmacro) a macro, and
that would be a good thing in its own right.  When I tried that, the
only problem I bumped into was that we want to dump "#$" in the .elc
file to refer to load-file-name for on-demand loading of docstrings,
so as soon as someone comes up with a clean/neat way to get `prin1' to
output a "#$", we'll turn defun and defmacro into macros and this
issue will disappear.
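
For reference, the kind of "defun within a non-empty context" we're
talking about is something like this (made-up names):

(let ((counter 0))
  (defun my-next-id ()
    (setq counter (1+ counter))))

In Common Lisp the inner defun closes over `counter'; that's the case
that currently isn't handled.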

>>> - Disassembling a closure reports closed-over variables as constants;
>>> they're not.
>> They are.
> Err, yes.  They are.

Thank you ;-)

>>> - Do we really use a whole cons cell for each closed-over variable, even in
>>> compiled code?
>> Yes, tho only for variables which are mutated (yup, `setq' is costly).
> Couldn't we create a box type, which would be like a cons except that
> it'd hold just one value?  (The #<foo> read-syntax is available.)

What for?  We currently use the lower 3 bits as the type tag, so
objects have to be aligned on a multiple of 8; hence on 32-bit systems
the smallest objects we can handle take up 64 bits, and since on such
systems cons cells take up exactly 64 bits, they're pretty close to
optimal.  Also, cons cells carry their type in the immediate 3-bit
tag, which is good for efficiency; but if we wanted to add another
small object type we couldn't give it one of those 3-bit tag values,
because they're all used already, so we'd have to treat it as
a special kind of vector or misc type, and all of those objects
already need more than 64 bits... nah, cons cells work just fine.
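
To make the mutation point concrete (illustrative code under
lexical-binding, made-up names):

(defun make-accumulator (n)
  ;; `n' is setq'd from inside the closure, so it gets wrapped in
  ;; a cons cell (the costly case mentioned above).
  (lambda (delta) (setq n (+ n delta))))

(defun make-adder (n)
  ;; `n' is only read, so its value is simply copied into the closure.
  (lambda (delta) (+ n delta)))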

OTOH we do need to reduce the number of cases where we have to convert
variables to cons-cells, and that mostly means "use fewer closures".
This is actually "easy" to do right now: change the (byte-code/compiler)
implementation of condition-case, unwind-protect, and catch so they
don't require closures, since they're the main source of closures.
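
For instance, in code like

(let ((retries 0))
  (condition-case nil
      (risky-op)                        ; `risky-op' is made up
    (error (setq retries (1+ retries)))))

the handler currently ends up as a separate closure, which in turn
forces `retries' into a cons cell even though nothing escapes the
form.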


        Stefan


