bug-bash

Re: Question


From: Robert Elz
Subject: Re: Question
Date: Sun, 17 Sep 2017 10:16:19 +0700

    Date:        Thu, 14 Sep 2017 20:22:04 -0600
    From:        Bob Proulx <bob@proulx.com>
    Message-ID:  <20170914195843892632457@bob.proulx.com>

  | However this difference in philosophy does not change the technology
  | that creating a compiler for an interpreted language is very hard.
  | Maybe impossible.

No.  Not impossible, not even all that difficult - just, in this case,
not all that beneficial either.   (There are Lisp compilers around, for
example.)

It has been a long time since I looked inside bash, and I remember none
of it, but I maintain another shell (currently, anyway) and it most
certainly contains a compiler - it just never emits the compiled code
in any external form; it stays internal.  Without that it would be
far too slow.   While bash is slow, it isn't that slow (i.e. it is fast
enough for its purpose) so I assume it does the same (or something similar.)

What is relevant though is that compilers for languages like these
rarely generate machine code that is directly executed.  Because of the
point made earlier, that code can be generated on the fly (the canonical
case is:
        read var
        eval "$var"
) any resulting binary must contain the complete compiler, so it can
handle cases like this - hence the usual course of action is to compile into
an intermediate form, then interpret that (rather than directly interpreting
the untouched input format).   This also keeps the whole thing portable,
so no variation (beyond writing portable code in the first place) is needed
to make the whole system run on different architectures.
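
To make the canonical case above a little more concrete (this fragment is
purely illustrative), the text handed to eval simply does not exist until
run time, so no ahead-of-time compiler can ever see it - whatever "compiled"
form is produced has to carry the full parser/compiler along with it:

        # the command to run is data, only available at run time
        printf 'command to run: '
        read var
        # the shell has to parse and compile "$var" from scratch, here and now
        eval "$var"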

The reason that no-one has bothered to do something more than what is now
done for sh is that the benefits just aren't there - not that it is
impossible, or even very hard.   As was noted, with any normal shell script
(there are exceptions) almost all the time is used by the utilities that
the script runs, not by the script itself, so taking the extra time to
produce a compiled external form, then read it back in every time the script
is run, would not usually lead to any net increase in execution speed.
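
A rough illustration of that (the files and commands here are just examples,
any typical script would show the same) - nearly all of the elapsed time
below is spent inside grep, sort and head, not in the shell itself:

        # almost all of this is time spent in the utilities,
        # not in the shell parsing and running the loop
        time sh -c '
                for f in /etc/services /etc/protocols; do
                        grep -v "^#" "$f" | sort | head -n 3
                done > /dev/null
        '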

Now I believe that was not the objective here - the objective was just to
make the code harder for others to read/modify/steal - and unfortunately
that would not really work either.  Along with compilers come disassemblers
(they are needed, if for no other reason, to help debug the compiler - the
shell I maintain has one built in; it is not usually included in the versions
people use, but can be, just by recompiling) and for these kinds of languages
they are usually very good.  Aside from white space, comments, and the
ordering of redirects relative to the rest of the words on the command line,
the one I maintain produces essentially identical output to the script input
(it might occasionally leave out redundant syntax that was in the original.)
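
To give a picture of what that means (this is made up for illustration, not
output from any particular shell's disassembler), a fragment written as

        # log the greeting
        >out.log   echo   hello  world

might be re-emitted as something like

        echo hello world >out.log

- white space normalised, the comment gone, and the redirect moved after the
other words - but a script that behaves identically to the original.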

Now to the philosophical ... I agree with the aims of free software, but
I believe that should be interpreted correctly - that is, if software is
free, it should impose no restrictions at all upon its recipients, and that
includes hidden ones like "you should reciprocate and make your software
free too" (which is beyond what GNU actually expects, I believe, but many
people seem to expect it - which really means that those people are not
really offering free software, they're just bartering: I'll give you mine,
you give me yours ... that's just commerce.)   Any restrictions on the
recipient mean that the software is not really free, and that includes
nonsense like requiring users to redistribute the sources to anyone who
wants them.   That's not freedom, that's an obligation (serfdom - you have
to do my work for me.)

kre



