axiom-developer

Re: Axiom musings...


From: Tim Daly
Subject: Re: Axiom musings...
Date: Mon, 27 Sep 2021 10:35:38 -0400

Greg Wilson asked "How Reliable is Scientific Software?"
https://neverworkintheory.org/2021/09/25/how-reliable-is-scientific-software.html

which is a really interesting read. For example:

 [Hatton1994] is now a quarter of a century old, but its conclusions
are still fresh. The authors fed the same data into nine commercial
geophysical software packages and compared the results; they found
that, "numerical disagreement grows at around the rate of 1% in
average absolute difference per 4000 lines of implemented code, and,
even worse, the nature of the disagreement is nonrandom" (i.e., the
authors of different packages make similar mistakes).


On 9/26/21, Tim Daly <axiomcas@gmail.com> wrote:
> I should note that the latest board I've just unboxed
> (a PYNQ-Z2) is built around a Zynq Z-7020 chip from Xilinx (AMD).
>
> What makes it interesting is that it contains two hard
> processor cores and an FPGA, connected by 9 paths
> for communication. The processors can be run
> independently, so there is the possibility of a parallel
> version of some Axiom algorithms (assuming I had
> the time, which I don't).
>
> Previously, either the hard (physical) processor was
> separate from the FPGA with minimal communication,
> or a soft core processor had to be created in the FPGA
> and was much slower.
>
> Now the two have been combined in a single chip.
> That means that my effort to run a proof checker on
> the FPGA and the algorithm on the CPU just got to
> the point where coordination is much easier.
>
> Now all I have to do is figure out how to program this
> beast.
>
> There is no such thing as a simple job.
>
> Tim
>
>
> On 9/26/21, Tim Daly <axiomcas@gmail.com> wrote:
>> I'm familiar with most of the traditional approaches
>> like Theorema. The bibliography contains most of the
>> more interesting sources. [0]
>>
>> There is a difference between traditional approaches to
>> connecting computer algebra and proofs and my approach.
>>
>> Proving an algorithm, like the GCD, in Axiom is hard.
>> There are many GCDs (e.g. NNI vs POLY) and there
>> are theorems and proofs passed at runtime in the
>> arguments of the newly constructed domains. This
>> involves a lot of dependent type theory and issues of
>> compile time / runtime argument evaluation. The issues
>> that arise are difficult and still being debated in the type
>> theory community.
>>
>> I am putting the definitions, theorems, and proofs (DTP)
>> directly into the category/domain hierarchy. Each category
>> will have the DTP specific to it. That way a commutative
>> domain will inherit a commutative theorem and a
>> non-commutative domain will not.
>>
>> Each domain will have additional DTPs associated with
>> the domain (e.g. NNI vs Integer) as well as any DTPs
>> it inherits from the category hierarchy. Functions in the
>> domain will have associated DTPs.
>>
>> A function to be proven will then inherit all of the relevant
>> DTPs. The proof will be attached to the function and
>> both will be sent to the hardware (proof-carrying code).
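>>
>> As a rough illustration of the intended structure (a sketch only;
>> the class and slot names here are hypothetical, not the actual
>> SANE code), a category could carry its DTPs and a function could
>> see everything it inherits:
>>
>>   ;; Each category carries the definitions, theorems, and proofs
>>   ;; (DTPs) specific to it.
>>   (defclass category ()
>>     ((name    :initarg :name    :reader cat-name)
>>      (parents :initarg :parents :initform nil :reader cat-parents)
>>      (dtps    :initarg :dtps    :initform nil :reader cat-dtps)))
>>
>>   (defun all-dtps (cat)
>>     "Collect the DTPs of CAT plus everything it inherits."
>>     (append (cat-dtps cat)
>>             (mapcan #'all-dtps (cat-parents cat))))
>>
>>   ;; A commutative domain inherits the commutativity theorem;
>>   ;; a non-commutative domain never sees it.
>>   (defvar *monoid*
>>     (make-instance 'category :name 'monoid
>>       :dtps '((theorem associativity "(a*b)*c = a*(b*c)"))))
>>
>>   (defvar *abelian-monoid*
>>     (make-instance 'category :name 'abelian-monoid
>>       :parents (list *monoid*)
>>       :dtps '((theorem commutativity "a*b = b*a"))))
>>
>>   ;; (all-dtps *abelian-monoid*) ==> both theorems; a proof built
>>   ;; from them travels with the function (proof-carrying code).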
>>
>> The proof checker, running on a field-programmable
>> gate array (FPGA), will check the proof at runtime, in
>> parallel with the algorithm running on the CPU
>> (aka "trust down to the metal"). (Note that Intel
>> and AMD have built combined CPU/FPGA chips,
>> currently only available in the cloud.)
>>
>>
>>
>> I am (slowly) making progress on the research.
>>
>> I have the hardware and nearly have the proof
>> checker from LEAN running on my FPGA.
>>
>> I'm in the process of spreading the DTPs from
>> LEAN across the category/domain hierarchy.
>>
>> The current Axiom build extracts all of the functions
>> but does not yet have the DTPs.
>>
>> I have to restructure the system, including the compiler
>> and interpreter, to parse and inherit the DTPs. I have
>> some of that code, but only part of it has been pushed
>> to the repository (volume 15), and what is there is
>> rather trivial, out of date, and incomplete.
>>
>> I'm clearly not smart enough to prove the Risch
>> algorithm and its associated machinery but the needed
>> definitions and theorems will be available to someone
>> who wants to try.
>>
>> [0] https://github.com/daly/PDFS/blob/master/bookvolbib.pdf
>>
>>
>> On 8/19/21, Tim Daly <axiomcas@gmail.com> wrote:
>>> =========================================
>>>
>>> REVIEW (Axiom on WSL2 Windows)
>>>
>>>
>>> So the steps to run Axiom from a Windows desktop
>>>
>>> 1 Windows) install XMing on Windows for X11 server
>>>
>>> http://www.straightrunning.com/XmingNotes/
>>>
>>> 2 WSL2) Install Axiom in WSL2
>>>
>>> sudo apt install axiom
>>>
>>> 3 WSL2) modify /usr/bin/axiom to fix the bug:
>>> (someone changed the axiom startup script.
>>> It won't work on WSL2. I don't know who did
>>> or how to get it fixed.)
>>>
>>> sudo emacs /usr/bin/axiom
>>>
>>> (split the line into 3 and add quote marks)
>>>
>>> export SPADDEFAULT=/usr/local/axiom/mnt/linux
>>> export AXIOM=/usr/lib/axiom-20170501
>>> export "PATH=/usr/lib/axiom-20170501/bin:$PATH"
>>>
>>> 4 WSL2) create a .axiom.input file to include startup cmds:
>>>
>>> emacs .axiom.input
>>>
>>> )cd "/mnt/c/yourpath"
>>> )sys pwd
>>>
>>> 5 WSL2) create a "myaxiom" command that sets the
>>>     DISPLAY variable and starts axiom
>>>
>>> emacs myaxiom
>>>
>>> #! /bin/bash
>>> export DISPLAY=:0.0
>>> axiom
>>>
>>> 6 WSL2) put it in the /usr/bin directory
>>>
>>> chmod +x myaxiom
>>> sudo cp myaxiom /usr/bin/myaxiom
>>>
>>> 7 WINDOWS) start the X11 server
>>>
>>> (XMing XLaunch Icon on your desktop)
>>>
>>> 8 WINDOWS) run myaxiom from PowerShell
>>> (this should start axiom with graphics available)
>>>
>>> wsl myaxiom
>>>
>>> 9 WINDOWS) make a PowerShell desktop shortcut
>>>
>>> https://superuser.com/questions/886951/run-powershell-script-when-you-open-powershell
>>>
>>> Tim
>>>
>>> On 8/13/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>> A great deal of thought is directed toward making the SANE version
>>>> of Axiom as flexible as possible, decoupling mechanism from theory.
>>>>
>>>> An interesting publication by Brian Cantwell Smith [0], "Reflection
>>>> and Semantics in LISP" seems to contain interesting ideas related
>>>> to our goal. Of particular interest is the ability to reason about and
>>>> perform self-referential manipulations. In a dependently-typed
>>>> system it seems interesting to be able to "adapt" code to handle
>>>> run-time computed arguments to dependent functions. The abstract:
>>>>
>>>>    "We show how a computational system can be constructed to "reason",
>>>> effectively
>>>>    and consequentially, about its own inferential processes. The
>>>> analysis proceeds in two
>>>>    parts. First, we consider the general question of computational
>>>> semantics, rejecting
>>>>    traditional approaches, and arguing that the declarative and
>>>> procedural aspects of
>>>>    computational symbols (what they stand for, and what behaviour they
>>>> engender) should be
>>>>    analysed independently, in order that they may be coherently
>>>> related. Second, we
>>>>    investigate self-referential behavior in computational processes,
>>>> and show how to embed an
>>>>    effective procedural model of a computational calculus within that
>>>> calculus (a model not
>>>>    unlike a meta-circular interpreter, but connected to the
>>>> fundamental operations of the
>>>>    machine in such a way as to provide, at any point in a computation,
>>>> fully articulated
>>>>    descriptions of the state of that computation, for inspection and
>>>> possible modification). In
>>>>    terms of the theories that result from these investigations, we
>>>> present a general architecture
>>>>    for procedurally reflective processes, able to shift smoothly
>>>> between dealing with a given
>>>>    subject domain, and dealing with their own reasoning processes over
>>>> that domain.
>>>>
>>>>    An instance of the general solution is worked out in the context of
>>>> an applicative
>>>>    language. Specifically, we present three successive dialects of
>>>> LISP: 1-LISP, a distillation of
>>>>    current practice, for comparison purposes; 2-LISP, a dialect
>>>> constructed in terms of our
>>>>    rationalised semantics, in which the concept of evaluation is
>>>> rejected in favour of
>>>>    independent notions of simplification and reference, and in which
>>>> the respective categories
>>>>    of notation, structure, semantics, and behaviour are strictly
>>>> aligned; and 3-LISP, an
>>>>    extension of 2-LISP endowed with reflective powers."
>>>>
>>>> Axiom SANE builds dependent types on the fly. The ability to
>>>> access both the reflection of the tower of algebra and the
>>>> reflection of the tower of proofs at the time of construction
>>>> makes the construction of a new domain or specific algorithm
>>>> easier and more general.
>>>>
>>>> This is of particular interest because one of the efforts is to
>>>> build "all the way down to the metal". If each layer is
>>>> constructed on top of previous proven layers and the new layer
>>>> can "reach below" to lower layers, then the tower of layers can
>>>> be built without duplication.
>>>>
>>>> Tim
>>>>
>>>> [0] Smith, Brian Cantwell, "Reflection and Semantics in LISP",
>>>> POPL '84: Proceedings of the 11th ACM SIGACT-SIGPLAN Symposium
>>>> on Principles of Programming Languages, January 1984, pages
>>>> 23–35. https://doi.org/10.1145/800017.800513
>>>>
>>>> On 6/29/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>> Having spent time playing with hardware it is perfectly clear that
>>>>> future computational mathematics efforts need to adapt to using
>>>>> parallel processing.
>>>>>
>>>>> I've spent a fair bit of time thinking about structuring Axiom to
>>>>> be parallel. Most past efforts have tried to focus on making a
>>>>> particular algorithm parallel, such as a matrix multiply.
>>>>>
>>>>> But I think that it might be more effective to make each domain
>>>>> run in parallel. A computation crosses multiple domains so a
>>>>> particular computation could involve multiple parallel copies.
>>>>>
>>>>> For example, computing the Cylindrical Algebraic Decomposition
>>>>> could recursively decompose the plane. Indeed, any tree-recursive
>>>>> algorithm could be run in parallel "in the large" by creating new
>>>>> running copies of the domain for each sub-problem.
>>>>>
>>>>> So the question becomes, how does one manage this?
>>>>>
>>>>> A similar problem occurs in robotics where one could have multiple
>>>>> wheels, arms, propellers, etc. that need to act independently but
>>>>> in coordination.
>>>>>
>>>>> The robot solution uses ROS2. The three ideas are ROSCORE,
>>>>> TOPICS with publish/subscribe, and SERVICES with request/response.
>>>>> These are communication paths defined between processes.
>>>>>
>>>>> ROS2 has a "roscore" which is basically a phonebook of "topics".
>>>>> Any process can create or look up the currently active topics, e.g.:
>>>>>
>>>>>    rosnode list
>>>>>
>>>>> TOPICS:
>>>>>
>>>>> Any process can PUBLISH a topic (which is basically a typed data
>>>>> structure), e.g. the topic /hw with the String data "Hello, World":
>>>>>
>>>>>    rostopic pub /hw std_msgs/String "Hello, World"
>>>>>
>>>>> Any process can SUBSCRIBE to a topic, such as /hw, and get a
>>>>> copy of the data, e.g.:
>>>>>
>>>>>    rostopic echo /hw   ==> "Hello, World"
>>>>>
>>>>> Publishers talk, subscribers listen.
>>>>>
>>>>>
>>>>> SERVICES:
>>>>>
>>>>> Any process can make a REQUEST of a SERVICE and get a RESPONSE.
>>>>> This is basically a remote function call.
>>>>>
>>>>>
>>>>>
>>>>> Axiom in parallel?
>>>>>
>>>>> So domains could run, each in its own process. Each could provide
>>>>> services, one for each function. Any other process could request
>>>>> a computation and get the result as a response. Domains could
>>>>> request services from other domains, either waiting for responses
>>>>> or continuing while the response is being computed.
>>>>>
>>>>> The output could be sent anywhere, to a terminal, to a browser,
>>>>> to a network, or to another process using the publish/subscribe
>>>>> protocol, potentially all at the same time since there can be many
>>>>> subscribers to a topic.
>>>>>
>>>>> Available domains could be dynamically added by announcing
>>>>> themselves as new "topics" and could be dynamically looked-up
>>>>> at runtime.
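>>>>>
>>>>> A minimal sketch of this structure in Lisp (hypothetical names;
>>>>> a real version would put each domain in its own process and use
>>>>> a network transport rather than in-image function calls):
>>>>>
>>>>>   ;; The "phonebook": topic name -> list of subscriber functions.
>>>>>   (defvar *topics* (make-hash-table :test #'equal))
>>>>>
>>>>>   (defun subscribe (topic fn)
>>>>>     "Register FN to receive every message published on TOPIC."
>>>>>     (push fn (gethash topic *topics*)))
>>>>>
>>>>>   (defun publish (topic msg)
>>>>>     "Send MSG to every subscriber of TOPIC."
>>>>>     (dolist (fn (gethash topic *topics*))
>>>>>       (funcall fn msg)))
>>>>>
>>>>>   ;; SERVICES: request/response, i.e. a remote function call.
>>>>>   ;; A domain would register one service per exported function.
>>>>>   (defvar *services* (make-hash-table :test #'equal))
>>>>>
>>>>>   (defun provide-service (name fn)
>>>>>     (setf (gethash name *services*) fn))
>>>>>
>>>>>   (defun request (name &rest args)
>>>>>     "Call service NAME and wait for the response."
>>>>>     (apply (gethash name *services*) args))
>>>>>
>>>>>   ;; Example: an NNI domain announces its gcd as a service.
>>>>>   (provide-service "/NNI/gcd" #'gcd)
>>>>>   (subscribe "/hw" (lambda (msg) (format t "~a~%" msg)))
>>>>>   (publish "/hw" "Hello, World")   ; every subscriber prints it
>>>>>   (request "/NNI/gcd" 12 18)       ; ==> 6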
>>>>>
>>>>> This structure allows function-level / domain-level parallelism.
>>>>> It is very effective in the robot world and I think it might be a
>>>>> good structuring mechanism to allow computational mathematics
>>>>> to take advantage of multiple processors in a disciplined fashion.
>>>>>
>>>>> Axiom has a thousand domains and each could run on its own core.
>>>>>
>>>>> In addition, notice that each domain is independent of the others.
>>>>> So if we want to use BLAS Fortran code, it could just be another
>>>>> service node. In fact, any "foreign function" could transparently
>>>>> cooperate in a distributed Axiom.
>>>>>
>>>>> Another key feature is that proofs can be "by node".
>>>>>
>>>>> Tim
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On 6/5/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>> Axiom is based on first-class dependent types. Deciding when
>>>>>> two types are equivalent may involve computation. See
>>>>>> Christiansen, David Thrane "Checking Dependent Types with
>>>>>> Normalization by Evaluation" (2019)
>>>>>>
>>>>>> This puts an interesting constraint on building types. Each
>>>>>> constructed type has to export a function that decides whether
>>>>>> a given type is "equivalent" to it.
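>>>>>>
>>>>>> A toy sketch of that constraint (hypothetical representation;
>>>>>> NORMALIZE here just evaluates arithmetic in type expressions,
>>>>>> standing in for real normalization by evaluation):
>>>>>>
>>>>>>   (defstruct ctype
>>>>>>     name       ; e.g. (SquareMatrix (+ 1 1) Integer)
>>>>>>     equiv-p)   ; exported decision function, type -> boolean
>>>>>>
>>>>>>   (defun normalize (form)
>>>>>>     "Reduce type-level expressions such as (+ 1 1) to values."
>>>>>>     (cond ((and (consp form) (member (car form) '(+ - *)))
>>>>>>            (apply (car form) (mapcar #'normalize (cdr form))))
>>>>>>           ((consp form)
>>>>>>            (cons (car form) (mapcar #'normalize (cdr form))))
>>>>>>           (t form)))
>>>>>>
>>>>>>   (defun make-matrix-type (n elt)
>>>>>>     (let ((me (list 'SquareMatrix n elt)))
>>>>>>       (make-ctype
>>>>>>        :name me
>>>>>>        :equiv-p (lambda (other)
>>>>>>                   (equal (normalize me)
>>>>>>                          (normalize (ctype-name other)))))))
>>>>>>
>>>>>>   ;; Deciding equivalence requires computing (+ 1 1):
>>>>>>   (funcall (ctype-equiv-p (make-matrix-type '(+ 1 1) 'Integer))
>>>>>>            (make-matrix-type 2 'Integer))   ; ==> T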
>>>>>>
>>>>>> The notion of "equivalence" might involve category ideas
>>>>>> of natural transformation and univalence. Sigh.
>>>>>>
>>>>>> That's an interesting design point.
>>>>>>
>>>>>> Tim
>>>>>>
>>>>>>
>>>>>> On 5/5/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>> It is interesting that programmers' eyes and expectations adapt
>>>>>>> to the tools they use. For instance, I use emacs and expect to
>>>>>>> work directly in files and multiple buffers. When I try to use
>>>>>>> one of the many IDE tools I find they tend to "get in the way".
>>>>>>> I already know, or can quickly find, whatever they try to tell
>>>>>>> me. If you use an IDE you probably find emacs "too sparse" for
>>>>>>> programming.
>>>>>>>
>>>>>>> Recently I've been working in a sparse programming environment.
>>>>>>> I'm exploring the question of running a proof checker in an FPGA.
>>>>>>> The FPGA development tools are painful at best and not intuitive,
>>>>>>> since you SEEM to be programming but you're actually describing
>>>>>>> hardware gates, connections, and timing. This is an environment
>>>>>>> where everything happens all-at-once and all-the-time (like the
>>>>>>> circuits in your computer). It is the "assembly language of
>>>>>>> circuits". Naturally, my eyes have adapted to this rather raw
>>>>>>> level.
>>>>>>>
>>>>>>> That said, I'm normally doing literate programming all the time.
>>>>>>> My typical file is a document which is a mixture of latex and lisp.
>>>>>>> It is something of a shock to return to that world. It is clear why
>>>>>>> people who program in Python find lisp to be a "sea of parens".
>>>>>>> Yet as a lisp programmer, I don't even see the parens, just code.
>>>>>>>
>>>>>>> It takes a few minutes in a literate document to adapt vision to
>>>>>>> see the latex / lisp combination as natural. The latex markup,
>>>>>>> like the lisp parens, eventually just disappears. What remains
>>>>>>> is just lisp and natural language text.
>>>>>>>
>>>>>>> This seems painful at first but eyes quickly adapt. The upside
>>>>>>> is that there is always a "finished" document that describes the
>>>>>>> state of the code. The overhead of writing a paragraph to
>>>>>>> describe a new function or change a paragraph to describe the
>>>>>>> changed function is very small.
>>>>>>>
>>>>>>> Using a Makefile I latex the document to generate a current PDF
>>>>>>> and then I extract, load, and execute the code. This loop catches
>>>>>>> errors in both the latex and the source code. Keeping an open file
>>>>>>> in
>>>>>>> my pdf viewer shows all of the changes in the document after every
>>>>>>> run of make. That way I can edit the book as easily as the code.
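>>>>>>>
>>>>>>> The extract step can be tiny. A sketch of a tangle function in
>>>>>>> Lisp (assuming code chunks in the document are bracketed by
>>>>>>> \begin{chunk} and \end{chunk} lines; the real convention may
>>>>>>> differ):
>>>>>>>
>>>>>>>   (defun tangle (texfile lispfile)
>>>>>>>     "Copy every line between \\begin{chunk} and \\end{chunk}
>>>>>>>      markers from TEXFILE to LISPFILE."
>>>>>>>     (with-open-file (in texfile)
>>>>>>>       (with-open-file (out lispfile :direction :output
>>>>>>>                                     :if-exists :supersede)
>>>>>>>         (loop with in-chunk = nil
>>>>>>>               for line = (read-line in nil)
>>>>>>>               while line
>>>>>>>               do (cond ((search "\\begin{chunk}" line)
>>>>>>>                         (setf in-chunk t))
>>>>>>>                        ((search "\\end{chunk}" line)
>>>>>>>                         (setf in-chunk nil))
>>>>>>>                        (in-chunk (write-line line out)))))))
>>>>>>>
>>>>>>>   ;; The make loop is then roughly: run latex on the document
>>>>>>>   ;; (catching latex errors, refreshing the PDF), then
>>>>>>>   ;; (tangle "book.tex" "book.lisp") and (load "book.lisp")
>>>>>>>   ;; (catching lisp errors).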
>>>>>>>
>>>>>>> Ultimately I find that writing the book while writing the code is
>>>>>>> more productive. I don't have to remember why I wrote something
>>>>>>> since the explanation is already there.
>>>>>>>
>>>>>>> We all have our own way of programming and our own tools.
>>>>>>> But I find literate programming to be a real advance over IDE
>>>>>>> style programming and "raw code" programming.
>>>>>>>
>>>>>>> Tim
>>>>>>>
>>>>>>>
>>>>>>> On 2/27/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>>> The systems I use have the interesting property of
>>>>>>>> "Living within the compiler".
>>>>>>>>
>>>>>>>> Lisp, Forth, Emacs, and other systems that present themselves
>>>>>>>> through the Read-Eval-Print-Loop (REPL) allow the
>>>>>>>> ability to deeply interact with the system, shaping it to your
>>>>>>>> need.
>>>>>>>>
>>>>>>>> My current thread of study is software architecture. See
>>>>>>>> https://www.youtube.com/watch?v=W2hagw1VhhI&feature=youtu.be
>>>>>>>> and https://www.georgefairbanks.com/videos/
>>>>>>>>
>>>>>>>> My current thinking on SANE involves the ability to
>>>>>>>> dynamically define categories, representations, and functions
>>>>>>>> along with "composition functions" that permit choosing a
>>>>>>>> combination at the time of use.
>>>>>>>>
>>>>>>>> You might want a domain for handling polynomials. There are
>>>>>>>> a lot of choices, depending on your use case. You might want
>>>>>>>> different representations. For example, you might want dense,
>>>>>>>> sparse, recursive, or "machine compatible fixnums" (e.g. to
>>>>>>>> interface with C code). If these don't exist it ought to be
>>>>>>>> possible to create them, as in the sketch below. Such
>>>>>>>> "lego-like" building blocks require careful thought about
>>>>>>>> creating "fully factored" objects.
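>>>>>>>>
>>>>>>>> A sketch of such a composition function (hypothetical; a real
>>>>>>>> SANE domain is far richer than a property list of closures):
>>>>>>>>
>>>>>>>>   (defun make-poly-domain (rep)
>>>>>>>>     "Compose a polynomial domain from a chosen representation.
>>>>>>>>      REP is one of :dense or :sparse."
>>>>>>>>     (ecase rep
>>>>>>>>       (:dense            ; coefficient vector, index = exponent
>>>>>>>>        (list :rep :dense
>>>>>>>>              :zero #(0)
>>>>>>>>              :degree (lambda (p) (1- (length p)))))
>>>>>>>>       (:sparse           ; list of (exponent . coefficient)
>>>>>>>>        (list :rep :sparse
>>>>>>>>              :zero '()
>>>>>>>>              :degree (lambda (p) (if p (caar p) -1))))))
>>>>>>>>
>>>>>>>>   ;; Choose the combination at the time of use:
>>>>>>>>   (defparameter *p* (make-poly-domain :sparse))
>>>>>>>>   (funcall (getf *p* :degree) '((3 . 1) (0 . 5)))   ; ==> 3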
>>>>>>>>
>>>>>>>> Given that goal, the traditional barrier of "compiler" vs
>>>>>>>> "interpreter" does not seem useful. It is better to "live
>>>>>>>> within the compiler", which gives the ability to define new
>>>>>>>> things "on the fly".
>>>>>>>>
>>>>>>>> Of course, the SANE compiler is going to want an associated
>>>>>>>> proof of the functions you create along with the other parts
>>>>>>>> such as its category hierarchy and representation properties.
>>>>>>>>
>>>>>>>> There is no such thing as a simple job. :-)
>>>>>>>>
>>>>>>>> Tim
>>>>>>>>
>>>>>>>>
>>>>>>>> On 2/18/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>>>> The Axiom SANE compiler / interpreter has a few design points.
>>>>>>>>>
>>>>>>>>> 1) It needs to mix interpreted and compiled code in the same
>>>>>>>>> function. SANE allows dynamic construction of code as well as
>>>>>>>>> dynamic type construction at runtime. Both of these can occur
>>>>>>>>> in a runtime object. So there is potentially a mixture of
>>>>>>>>> interpreted and compiled code.
>>>>>>>>>
>>>>>>>>> 2) It needs to perform type resolution at compile time,
>>>>>>>>> without overhead, where possible. Since this is not always
>>>>>>>>> possible there needs to be a "prefix thunk" that will perform
>>>>>>>>> the resolution. Trivially, for example, if we have a +
>>>>>>>>> function we need to type-resolve the arguments.
>>>>>>>>>
>>>>>>>>> However, if we can prove at compile time that the types are
>>>>>>>>> both bounded-NNI and the result is bounded-NNI (i.e. fixnum
>>>>>>>>> in lisp) then we can inline a call to + at runtime. If not,
>>>>>>>>> we might have + applied to NNI and POLY(FLOAT), which requires
>>>>>>>>> a thunk to resolve types. The thunk could even "specialize
>>>>>>>>> and compile" the code before executing it.
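>>>>>>>>>
>>>>>>>>> A sketch of such a prefix thunk (hypothetical names;
>>>>>>>>> GENERIC-ADD stands in for the fully general, type-dispatching
>>>>>>>>> path):
>>>>>>>>>
>>>>>>>>>   (defun generic-add (x y)
>>>>>>>>>     "Placeholder for the slow path, e.g. + on NNI and
>>>>>>>>>      POLY(FLOAT)."
>>>>>>>>>     (+ x y))
>>>>>>>>>
>>>>>>>>>   (defun resolve-+ (x y)
>>>>>>>>>     "Pick an implementation of + from the runtime types."
>>>>>>>>>     (if (and (typep x 'fixnum) (typep y 'fixnum))
>>>>>>>>>         #'+              ; provably fixnum: inline fast path
>>>>>>>>>         #'generic-add))
>>>>>>>>>
>>>>>>>>>   (defun make-+-thunk ()
>>>>>>>>>     "Prefix thunk: resolve types on the first call, cache the
>>>>>>>>>      choice so later calls skip resolution.  (A real thunk
>>>>>>>>>      could instead specialize and COMPILE at this point.)"
>>>>>>>>>     (let ((impl nil))
>>>>>>>>>       (lambda (x y)
>>>>>>>>>         (unless impl
>>>>>>>>>           (setf impl (resolve-+ x y)))
>>>>>>>>>         (funcall impl x y))))
>>>>>>>>>
>>>>>>>>>   ;; (defparameter *plus* (make-+-thunk))
>>>>>>>>>   ;; (funcall *plus* 1 2) ==> 3, resolving on the first call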
>>>>>>>>>
>>>>>>>>> It turns out that the "threaded interpretive language" model
>>>>>>>>> used by Forth provides an efficient and effective way to do
>>>>>>>>> this. [0] Type resolution can be "inserted" in intermediate
>>>>>>>>> thunks. The model also supports dynamic overloading and tail
>>>>>>>>> recursion.
>>>>>>>>>
>>>>>>>>> Combining high-level CLOS code with low-level threading gives an
>>>>>>>>> easy to understand and robust design.
>>>>>>>>>
>>>>>>>>> Tim
>>>>>>>>>
>>>>>>>>> [0] Loeliger, R.G. "Threaded Interpretive Languages" (1981)
>>>>>>>>> ISBN 0-07-038360-X
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On 2/5/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>>>>> I've worked hard to make Axiom depend on almost no other
>>>>>>>>>> tools so that it would not get caught by "code rot" of
>>>>>>>>>> libraries.
>>>>>>>>>>
>>>>>>>>>> However, I'm also trying to make the new SANE version much
>>>>>>>>>> easier to understand and debug. To that end I've been
>>>>>>>>>> experimenting with some ideas.
>>>>>>>>>>
>>>>>>>>>> It should be possible to view source code, of course. But the
>>>>>>>>>> source code is not the only, nor possibly the best,
>>>>>>>>>> representation of the ideas. In particular, source code gets
>>>>>>>>>> compiled into data structures. In Axiom these data structures
>>>>>>>>>> really are a graph of related structures.
>>>>>>>>>>
>>>>>>>>>> For example, looking at the gcd function from NNI, there is
>>>>>>>>>> the representation of the gcd function itself. But there is
>>>>>>>>>> also a structure that is the REP (and, in the new system, is
>>>>>>>>>> separate from the domain).
>>>>>>>>>>
>>>>>>>>>> Further, there are associated specification and proof
>>>>>>>>>> structures. Even further, the domain inherits the category
>>>>>>>>>> structures, and from those it inherits logical axioms and
>>>>>>>>>> definitions through the proof structure.
>>>>>>>>>>
>>>>>>>>>> Clearly the gcd function is a node in a much larger graph
>>>>>>>>>> structure.
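>>>>>>>>>>
>>>>>>>>>> To make that concrete, a sketch of the kind of node and walk
>>>>>>>>>> involved (hypothetical slots; the real SANE structures
>>>>>>>>>> differ):
>>>>>>>>>>
>>>>>>>>>>   (defclass node ()
>>>>>>>>>>     ((name     :initarg :name     :reader node-name)
>>>>>>>>>>      (kind     :initarg :kind)    ; :function :rep :spec
>>>>>>>>>>                                   ; :proof :category ...
>>>>>>>>>>      (children :initarg :children :initform nil
>>>>>>>>>>                :reader node-children)))
>>>>>>>>>>
>>>>>>>>>>   (defun walk (node visit &optional (seen (make-hash-table)))
>>>>>>>>>>     "Visit NODE and everything reachable from it, once each --
>>>>>>>>>>      the kind of traversal a graph browser would present."
>>>>>>>>>>     (unless (gethash node seen)
>>>>>>>>>>       (setf (gethash node seen) t)
>>>>>>>>>>       (funcall visit node)
>>>>>>>>>>       (dolist (child (node-children node))
>>>>>>>>>>         (walk child visit seen))))
>>>>>>>>>>
>>>>>>>>>>   ;; The gcd node from NNI would have its REP, specification,
>>>>>>>>>>   ;; proof, and inherited category axioms as children.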
>>>>>>>>>>
>>>>>>>>>> When trying to decide why code won't compile it would be
>>>>>>>>>> useful to be able to see and walk these structures. I've
>>>>>>>>>> thought about using the browser, but browsers are too weak.
>>>>>>>>>> Either everything has to be "in a single tab to show the
>>>>>>>>>> graph" or "the nodes of the graph are in different tabs".
>>>>>>>>>> Plus, constructing dynamic graphs that change as the software
>>>>>>>>>> changes (e.g. by loading a new spad file or creating a new
>>>>>>>>>> function) represents the huge problem of keeping the browser
>>>>>>>>>> "in sync with the Axiom workspace". So something more dynamic
>>>>>>>>>> and embedded is needed.
>>>>>>>>>>
>>>>>>>>>> Axiom source gets compiled into CLOS data structures. Each
>>>>>>>>>> of these new SANE structures has an associated surface
>>>>>>>>>> representation, so they can be presented in user-friendly
>>>>>>>>>> form.
>>>>>>>>>>
>>>>>>>>>> Also, since Axiom is literate software, it should be possible
>>>>>>>>>> to look at the code in its literate form with the surrounding
>>>>>>>>>> explanation.
>>>>>>>>>>
>>>>>>>>>> Essentially we'd like to have the ability to "deep dive" into
>>>>>>>>>> the Axiom workspace, not only for debugging, but also for
>>>>>>>>>> understanding what functions are used, where they come from,
>>>>>>>>>> what they inherit, and how they are used in a computation.
>>>>>>>>>>
>>>>>>>>>> To that end I'm looking at using McClim, a lisp windowing
>>>>>>>>>> system. Since the McClim windows would be part of the lisp
>>>>>>>>>> image, they have access to display (and modify) the Axiom
>>>>>>>>>> workspace at all times.
>>>>>>>>>>
>>>>>>>>>> The only hesitation is that McClim uses quicklisp and drags
>>>>>>>>>> in a lot of other subsystems. It's all lisp, of course.
>>>>>>>>>>
>>>>>>>>>> These ideas aren't new. They were available on Symbolics
>>>>>>>>>> machines, a truly productive platform and one I sorely miss.
>>>>>>>>>>
>>>>>>>>>> Tim
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On 1/19/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>>>>>> Also of interest is the talk
>>>>>>>>>>> "The Unreasonable Effectiveness of Dynamic Typing for Practical
>>>>>>>>>>> Programs"
>>>>>>>>>>> https://vimeo.com/74354480
>>>>>>>>>>> which questions whether static typing really has any benefit.
>>>>>>>>>>>
>>>>>>>>>>> Tim
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On 1/19/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>>>>>>> Peter Naur wrote an article of interest:
>>>>>>>>>>>> http://pages.cs.wisc.edu/~remzi/Naur.pdf
>>>>>>>>>>>>
>>>>>>>>>>>> In particular, it mirrors my notion that Axiom needs
>>>>>>>>>>>> to embrace literate programming so that the "theory
>>>>>>>>>>>> of the problem" is presented as well as the "theory
>>>>>>>>>>>> of the solution". I quote the introduction:
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> This article is, to my mind, the most accurate account
>>>>>>>>>>>> of what goes on in designing and coding a program.
>>>>>>>>>>>> I refer to it regularly when discussing how much
>>>>>>>>>>>> documentation to create, how to pass along tacit
>>>>>>>>>>>> knowledge, and the value of XP's metaphor-setting
>>>>>>>>>>>> exercise. It also provides a way to examine a
>>>>>>>>>>>> methodology's economic structure.
>>>>>>>>>>>>
>>>>>>>>>>>> In the article, which follows, note that the quality of the
>>>>>>>>>>>> designing programmer's work is related to the quality of
>>>>>>>>>>>> the match between his theory of the problem and his theory
>>>>>>>>>>>> of the solution. Note that the quality of a later programmer's
>>>>>>>>>>>> work is related to the match between his theories and the
>>>>>>>>>>>> previous programmer's theories.
>>>>>>>>>>>>
>>>>>>>>>>>> Using Naur's ideas, the designer's job is not to pass along
>>>>>>>>>>>> "the design" but to pass along "the theories" driving the
>>>>>>>>>>>> design. The latter goal is more useful and more appropriate.
>>>>>>>>>>>> It also highlights that knowledge of the theory is tacit in
>>>>>>>>>>>> the owning, and so passing along the theory requires passing
>>>>>>>>>>>> along both explicit and tacit knowledge.
>>>>>>>>>>>>
>>>>>>>>>>>> Tim
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>


