emacs-devel

lexical mumblings


From: Miles Bader
Subject: lexical mumblings
Date: 19 Oct 2001 11:40:16 +0900

Lexical binding for emacs is a perennially popular topic (I first
remember talking about it [with Jamie Zawinski] in 1992), but as Stefan
Monnier pointed out recently, no one ever actually seems to do anything
about it, despite its obvious benefits.  I'd like to talk about how to
actually do it.

One thing that makes me hopeful about the practicality of this is
Stefan's idea that references to undeclared variables should look up the
stack of currently active function invocations and use the value of any
`lexical' binding of that variable it finds, for compatibility with
dynamic binding (there are additional wrinkles, but presumably they can
be worked out).  Perhaps this extra amount of backward compatibility is
the tweak that could make lexical binding practical for emacs?
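
To make the idea concrete, here is a small example of the situation
the fallback is meant to handle (the code and names are mine, and of
course assume the proposed behavior):

   ;; `old-sort-predicate' was written for dynamic binding: it refers
   ;; to `sort-key' without declaring or binding it, expecting the
   ;; caller to have bound it dynamically.
   (defun old-sort-predicate (a b)
     (< (funcall sort-key a) (funcall sort-key b)))

   (defun new-caller (items)
     (let ((sort-key #'length))        ; lexical under the new scheme
       ;; Under the proposed fallback, the free reference to `sort-key'
       ;; in `old-sort-predicate' would search up the active call
       ;; frames, find this lexical binding, and keep working as if
       ;; the binding were dynamic.
       (sort items #'old-sort-predicate)))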


So, as far as implementation:

It seems like you basically want to attach a `variable-frame' to each
function invocation, which is a vector of variables, and have operations
to get and set elements of that vector efficiently.  You want the
variable-frame to normally be allocated on a stack, to avoid GC
overhead, but you also have to make sure that lambda-expressions that
inherit the lexical context can see it, and be able to deal with these
lambda-expressions escaping (as closures).

For this reason it seems desirable to actually have _two_ variable-
frames per function invocation -- one stack-allocated one which holds
all variables not captured by lambda expressions, and one heap allocated
one that holds variables that _are_ (or may be) captured.  Of course,
the byte-compiler would know that lambda expressions only used by
`well-behaved' functions like mapcar can't escape, and so can use the
stack-allocated variable-frame.
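
As a concrete illustration (the example is mine, assuming such a
compilation scheme): in the following, TMP is never captured by any
lambda, so it could live in the stack-allocated frame, while COUNT is
captured by a lambda that escapes, so it would have to go in the
heap-allocated one.

   (defun make-counter ()
     (let ((tmp (current-time-string))  ; never captured  => stack frame
           (count 0))                   ; captured below  => heap frame
       (message "counter created at %s" tmp)
       ;; This lambda escapes (it is the return value), so COUNT must
       ;; live in heap-allocated storage that outlives this call.
       (lambda () (setq count (1+ count)))))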

What follows are some more detailed musings about the implementation:


[1] Basic implementation

Looking at the current emacs byte-code interpreter, it seems like the
existing byte-code stack can function nicely as the stack-allocated
variable-frame.  You can just add two new byte-ops to do indexed get/set
operations into the byte-code stack (a side-benefit of this approach is
that normal expression evaluation code could also use these new byte-ops
to retrieve values from the middle of the stack).

So, imagining that two new byte-ops are added:

   # N is the index from the _bottom_ of the stack.
   stack_ref N         # push stack element N on the stack
   stack_set N         # set the value of stack element N from TOP

Thus

   (let* ((x 7)
          (y (+ x 3)))
     (setq x (* x y))
     x)

would be represented as (disregarding any optimization):

   0    constant  7       # pushed on the stack, is in pos 0 => x
   1    stack_ref 0       # fetch x
   2    constant  3
   3    plus              # this result is in stack pos 1 => y
   4    stack_ref 0       # fetch x again
   5    stack_ref 1       # fetch y 
   6    mult
   7    stack_set 0       # store x
   8    stack_ref 0       # fetch x


[2] Captured lexical bindings

In the case of a captured lexical binding, e.g.:

  (let ((x 3)
        (y 4))
    (foo (lambda () (+ x y))
         (+ x 1)))

You (1) have to let the lambda-expr find the variables x & y somehow,
and (2) have to allocate them on the heap, since you don't know what
`foo' will do with the lambda-expr.

To do (2), it seems easiest to just allocate a secondary variable-frame
as a normal lisp vector; a reference to this vector could be stored on
the stack as a normal lexical variable for access.

To do (1), the way that I can think of (though it may not be the best)
is to simply heap-allocate a `closure object' to represent the lambda in
its lexical environment.  The closure would contain two values: a
reference to the heap vframe, and a reference to the code (which would
presumably be a byte-code object that takes the vframe reference as an
implicit first argument or something, which the byte-code for the
lambda-expr will see as being pre-pushed on the stack).
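
For concreteness, here is a minimal Lisp-level sketch of such a closure
object; the representation and the accessor names are mine, purely for
illustration:

   (defun make-closure (vframe code)
     ;; CODE is the byte-code object for the lambda body; VFRAME is the
     ;; heap-allocated variable-frame vector that the code expects to
     ;; see pre-pushed as its implicit first argument.
     (vector 'closure vframe code))

   (defun closure-vframe (closure) (aref closure 1))
   (defun closure-code   (closure) (aref closure 2))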

I'm going to imagine some additional new byte-ops, but these are merely
more concise versions of common code sequences, and aren't strictly
necessary:

   stack_vec_ref N, M   = stack_ref N, constant M, aref
   stack_vec_set N, M   = stack_ref N, constant M, aset
   vector N             = constant vector, ..., call N
   make_closure N       = constant make-closure, stack_ref N, swap, call 2

E.g., the above might become:

   0    constant  3        # x is slot 0 in the vector, this is its init val
   1    constant  4        # y is slot 1 in the vector, this is its init val
   2    vector    2        # make the heap vframe; the return value is
                           # in stack-position 0, which will be our
                           # local reference to the heap vframe
   3    constant  foo      # we're going to call foo
   4    constant  <lambda () (+ x y)>  # code that references the heap vframe
   5    make_closure  0    # close the lambda over the heap vframe
   6    stack_vec_ref 0, 0 # fetch x
   7    constant  1
   8    plus               # add 1 to x (this is the 2nd arg to foo)
   9    call      2        # call foo with the closure and x+1

The code for <lambda () (+ x y)> assumes that the heap vframe is already
in position 0 on its byte-code stack, and looks like:

   1    stack_vec_ref   0, 0
   2    stack_vec_ref   0, 1
   3    plus
   4    return


[3] Function arguments

Currently, Fbyte_code is called with the function arguments already
(dynamically) bound by funcall_lambda.  To treat function arguments
consistently with other variables, they should be bound lexically too.

This is very easy to do by simply having funcall_lambda pass the args
vector directly to Fbyte_code (instead of binding them) and having
Fbyte_code push them on the byte-stack (of course since the args vector
is not a lisp object, probably an intermediate C function would be
introduced, for which Fbyte_code would then become a simple wrapper to
allow calling from lisp code).

To make this backward-compatible, funcall_lambda should only use the
new-behavior if the byte-code object is expecting it; perhaps the
byte-code object's arg-list (which is normally used by funcall_lambda to
know what vars to bind) could have a special value that means `pass args
directly instead of binding'.

In cases where an argument _should_ be dynamically bound (e.g., because
it's declared with `defvar' somewhere), the byte code can explicitly
bind it.
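
For example (the function and its names are mine, purely illustrative):

   (defun find-match (regexp case-fold-search)
     ;; REGEXP is not defvar'd anywhere, so it would simply stay in its
     ;; lexical slot on the byte-stack.  CASE-FOLD-SEARCH is a known
     ;; special variable, so the compiled prologue would explicitly
     ;; establish a dynamic binding for it (making it visible to
     ;; `re-search-forward' below) and unbind it again on return.
     (re-search-forward regexp nil t))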


[4] Optimizing downward-only closures

Obviously all the above heap allocation is wasteful if the function
being called with a closure argument is just `mapcar' or something.  The
byte-compiler can easily recognize simple cases, but how should the
run-time implementation work?

My thoughts are that:

  (1) It should just allocate such `downward-closed-over' variables on
      the byte-stack, just like ordinary variables, somehow providing a
      pointer to the actual byte-stack object in the closure.

  (2) The closure object itself (which is nominally a lisp object)
      should also be allocated on the C stack, with alloca, by a special
      `make_downward_closure' byte-op.

One issue with (1) is:  In what form should the caller's byte-stack be
represented when passed to the special closure?  I think a reasonable
method is to slightly change the way the body of a byte-stack is
allocated (via alloca) in Fbyte_code -- instead of alloca'ing a plain
C array for the byte-stack, alloca a Lisp_Vector object with the
same number of elements.  The normal byte-code stack-manipulation is
done via the top/bottom pointers in the byte_stack object, and these can
completely ignore the Lisp_Vector header (so the existing code really
wouldn't have to be changed).  Only the special `make_downward_closure'
byte-op would pay attention to the fact that the body of the byte-stack
is actually now also a stack-allocated lisp vector.

This way, downward closures appear, to lisp and to called functions,
exactly the same as ordinary heap-allocated closures, but they don't
use any heap space.

An example would be:

   (let ((x 3)
         (list ...))
     (mapcar (lambda (y) (+ x y)) list))

becomes:

   1    constant  3
   2    constant  ...
   3    constant  mapcar
   4    constant  <lambda (y) (+ x y)>
   5    make_downward_closure
   6    stack_ref 1
   7    call      2

And the function <lambda (y) (+ x y)> is:

   1    stack_vec_ref 0, 0
   2    plus
   3    return

[Note that the explicit argument y is expected to be already on the
stack, in stack slot 1, right after the variable-frame reference.]

I don't think there are any GC issues as long as the downward-closure
doesn't escape, and the byte-compiler can be very conservative about
this, to be on the safe side.

Closures would be callable by funcall_lambda exactly like other
functions, except that the closure's variable-frame pointer would be
added as an implicit first argument.
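
In Lisp terms, the dispatch would amount to something like the
following sketch (using the illustrative closure representation from
section [2]; the real change would of course be in funcall_lambda
itself):

   (defun call-closure (closure &rest args)
     ;; Element 2 is the byte-code object, element 1 the heap vframe;
     ;; the vframe travels as a hidden first argument, which the
     ;; closure's byte-code sees pre-pushed in stack slot 0.
     (apply (aref closure 2) (aref closure 1) args))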


[5] Backward compatibility

I presume one big reason why no one has pushed for lexical binding in
elisp is that it's likely to break old programs.

Ordinarily, the compiler would assume that any variable declared with
`defvar' should be dynamically bound, and any other variable is safe to
lexically bind.
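
A minimal example of the distinction (the example is mine):

   (defvar print-length)               ; declared special somewhere

   (defun demo ()
     (let ((print-length 3)            ; defvar'd, so dynamically bound
           (items '(1 2 3 4 5)))       ; not defvar'd, so lexically bound
       ;; `prin1-to-string' sees the dynamic binding of `print-length'
       ;; and returns "(1 2 3 ...)".
       (prin1-to-string items)))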

For code which we have control over, it's possible to tell the compiler
that this is OK, via some special signal [e.g., (require 'lexical-let)].
Then it needn't worry at all about backward-compatibility.

However for code that we _don't_ have control over, Stefan's idea is to
simply have any reference to an unknown variable be turned into a search
up the call stack looking for appropriately named lexical bindings, as
well as dynamic bindings.  This needn't be as efficient as an ordinary
variable lookup, I think, since affected code should be fairly rare.

There are several problems that come to mind:

  (1) If there are both dynamic and lexical bindings extant, which
      should be used?  Is it possible to establish where in the
      call-stack a dynamic binding was made, so it could be ordered
      with respect to any lexical bindings?

      [I don't actually think this will be a problem in practice, as my
      experience is that the sort of names used for variables intended
      for use by a called function tend to be different from those
      intended for local-only use, so there's not _that_ much chance
      that an initial look up the call-stack for lexical bindings would
      return a false positive.]

  (2) How can the information necessary for this searching be
      represented in a way that has minimal impact on the normal case,
      where it's never used?  Ideally, it would not involve any runtime
      action at all by any code except the searcher.

The initial representation I thought about was to have a constant vector
of variable names which mirrors the structure of the variable-frame
slots.  This name vector could then be stored in a well-known location,
such as slot 0 of the stack.

However the obvious problem with that is that even if you ensure that
every variable has a unique stack slot (even when out of scope),
multiple variables in a function can share the same name.

Since the byte-code PC is available in each byte_stack object, it would
be possible to have the compiler generate `location dependent' variable
name information when necessary (obviously you want to do this only for
variables whose names _do_ conflict, since the vast majority do not).

For instance, if variables X, Y, and Z are stored in stack slots 0-2,
but there's also another variable called Y in stack slot 3 during
the byte-code PC range 6-10, the variable-name vector might look like
this:

   [X Y Z (Y 6 . 10)]

Since references to heap-allocated variable-frames are kept in stack
slots, like anonymous variables, names in them could be represented
recursively, like:

   [X [P Q] Y Z]

Here, stack-slot 1 refers to a heap-allocated variable-frame containing
variables P and Q, and the rest of the variables are on the stack
vframe.
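
A rough sketch (mine, and ignoring the recursive heap-vframe entries
for brevity) of how the searcher might resolve a name against such a
name vector:

   (defun vframe-slot-for-name (name-vector name pc)
     "Return the slot index bound to NAME at byte-code position PC, or nil."
     (let (found)
       (dotimes (i (length name-vector))
         (let ((entry (aref name-vector i)))
           (when (or (eq entry name)             ; plain entry, e.g. X
                     (and (consp entry)          ; PC-dependent, e.g. (Y 6 . 10)
                          (eq (car entry) name)
                          (<= (cadr entry) pc)
                          (<= pc (cddr entry))))
             (setq found i))))
       found))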


[6] Compiler support

The support necessary to implement lexical-binding in the compiler is
not conceptually difficult, but I don't know the byte-compiler code well
enough to say whether there would be practical difficulties.  [Anyone know?]


[7] Interpreter support

While lexical-binding is in some sense a natural form for compiled code
(since the compiler usually sees both binding-points and referencing-points),
it's not so for interpreted code.  So, what should be done for
interpreted code?

  (1) The problem could simply be ignored (as with maclisp), but that
      will probably cause too many problems for debugging.

  (2) Alternatively, something like the `lexical-let' macro in the `cl'
      package could be used (see the example after this list);
      unfortunately, it seems quite inefficient.

  (3) Stefan mentioned that someone he knew had actually implemented
      lexical binding for the interpreter, but I don't know the details
      of that (unfortunately I seem to have deleted that email).
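
For reference, here is what using the `cl' package's lexical-let looks
like today; the macro itself exists, though my comment about how it
works is only a rough characterization:

   (require 'cl)

   (defun make-adder (n)
     ;; `lexical-let' simulates the capture at the Lisp level (roughly,
     ;; by routing N through a hidden uninterned variable), which is
     ;; where the inefficiency comes from.
     (lexical-let ((n n))
       (lambda (x) (+ x n))))

   (funcall (make-adder 3) 4)          ; => 7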


[8] Implementation strategy

  (1) I think the changes to the byte-code runtime I described above are
      quite simple, and 100% backward-compatible, so they could be
      implemented quickly without causing any problems for existing code.

  (2) Then the byte-compiler could be changed to optionally produce
      byte-code that uses this runtime support, and programs could be
      tested and, where appropriate, declared safe to compile with
      lexical binding.

  (3) Well-known programs with `bad' behavior could also be tested to
      see how well the backward-compatibility hacks work.  If this
      support proves to work well, eventually the byte-compiler can be
      changed to use lexical binding by default.  At this point, the
      great mass of code starts to see the speed benefits that lexical
      binding offers.

      Of course there should always be a way to say `compile this
      code with dynamic binding', so that code that's _really_ bad, that
      doesn't even work with the backward-compatibility hacks, can be
      accommodated (hopefully there isn't any such code, though).

Most importantly there doesn't seem to be any need for a `flag day'; the
support can be added gradually, and old programs should continue to work.


If you've read this far, thank you for your time; now please tear my
arguments apart!


-Miles
-- 
Yo mama's so fat when she gets on an elevator it HAS to go down.


