Re: [patch #5603] Separate commands' parsing from their execution

From: John Darrington
Subject: Re: [patch #5603] Separate commands' parsing from their execution
Date: Fri, 1 Dec 2006 08:21:56 +0800
User-agent: Mutt/1.5.9i

On Thu, Nov 30, 2006 at 12:08:46PM -0800, Ben Pfaff wrote:
     This is a good start.  In the end, I think we'll want a way to
     serialize and deserialize a command to a machine-readable format.


     I'm not sure I like the name cmd_context.  To me a "context" is
     something dynamic, like a set of circumstances surrounding an
     event, but in fact any given cmd_context is static and fixed at
     compile time.  A really good name escapes me at the moment, but
     something more like cmd_{info,plan,class,form} I might find less
     objectionable.


     I don't much like the name "API2".  It's just not descriptive.

Maybe it would be better to reverse the sense of the test, and use
something like API_OBSOLETE.  Hopefully it'll go away eventually.

     Is it valid to destroy a command without running it?  If so, we
     should provide a way to test it.   

I'm not sure that I understand the second sentence. A command can be
destroyed without running it, but that wouldn't normally happen, of course.

     I think what you're getting at is that we want two interfaces to
     most commands: one that takes a lexer and runs the command;
     another that takes a specification and runs the command.

The first interface takes a lexer and returns a
specification.  Otherwise, that's correct.

     If the
     parser and the executor are well designed, then they can even be
     in completely separate source files; it might even make sense in
     some cases to move the executor out of src/language into
     src/data, etc.

I thought it might be best to create a new directory for the
executors, but yes, I think separate source files and separate
directories are the way to go.

     The code you're proposing divides this at a high level: command.c
     calls a function that parses the command and returns a
     specification, then calls a function to execute the
     specification.  My question is this: is it valuable to push this
     division up to this higher level?  As is, I don't see the
     advantage of doing it that way.

     There is also an objection that calling a function is a fairly
     easy and flexible thing to do, that requires only a single line
     of code, doesn't require any dynamic memory allocation, and so
     on.  But creating a specification as a structure, returning it,
     executing it in a separate function, and then discarding it, is
     more work and harder to read.  Take the N OF CASES command as a
     trivial example.  Before the patch, it looks like this:

     Of course, for more complicated commands the difference will be
     less dramatic.

     This is all predicated on the assumption that the GUI should
     execute commands directly, without going through the lexer and
     syntax parser.  Another alternative would be to make the GUI
     generate syntax and feed it to a lexer.  I think that this is a
     serious alternative.  SPSS can output syntax for commands
     executed through the GUI, which allows users who aren't familiar
     with syntax to construct simple syntax files and re-execute them
     with simple variations.  I think that this is an important and
     useful feature.  If we go this direction, it might be wasted
     effort to divide up command implementations into parsing and

I think it's certainly a good idea to have the GUI generate syntax,
and it would make sense to have a test mode to check that the syntax
produced parses properly.

But I'm not happy with the idea of the GUI producing code, and the
code then being submitted to the lexer before it can be executed.  I
guess my objections come down to these:

1.  I consider syntax and the GUI both to be front-ends to the process
    of getting PSPP to do things.  That is, they operate at the same
    level, and are mutual alternatives.  Having one front-end call
    another front-end seems to break this model.

2.  If and when we implement a client-server model like the Chicago
    Company does, we'll (as you indicated) need to
    serialize/deserialize commands and send them across a TCP/IP
    connection.  It would not make sense for the server to need a
    lexer.

Your objections about casts and memory allocation are valid, but this
can of course be alleviated by making an interface which takes care of
the casting, and the use of a memory pool can help to mitigate the
run-time cost of heap allocation.  It would also then become easier to
add serialise/deserialise methods to this interface if/when we need
them.


