axiom-developer

Re: Axiom musings...


From: Tim Daly
Subject: Re: Axiom musings...
Date: Tue, 29 Jun 2021 10:51:06 -0400

Having spent time playing with hardware, I find it perfectly clear
that future computational mathematics efforts need to adapt to
parallel processing.

I've spent a fair bit of time thinking about structuring Axiom to
be parallel. Most past efforts have focused on making a particular
algorithm parallel, such as matrix multiplication.

But I think it might be more effective to make each domain run in
parallel. A computation crosses multiple domains, so a single
computation could involve multiple domains running as parallel
copies.

For example, computing the Cylindrical Algebraic Decomposition
could recursively decompose the plane. Indeed, any tree-recursive
algorithm could be run in parallel "in the large" by creating new
running copies of the domain for each sub-problem.
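
As a minimal, runnable sketch of that idea (using the lparallel
library, with a toy bisection problem standing in for the real CAD
recursion):

   (ql:quickload :lparallel)
   (setf lparallel:*kernel* (lparallel:make-kernel 8)) ; one worker per core

   (defun decompose (lo hi)
     "Recursively split [LO,HI]; each half runs as a parallel future,
   the way each CAD sub-problem would get its own copy of the domain."
     (if (<= (- hi lo) 1000)
         (loop for i from lo to hi sum i)   ; small enough: solve directly
         (let* ((mid   (floor (+ lo hi) 2))
                (left  (lparallel:future (decompose lo mid)))
                (right (lparallel:future (decompose (1+ mid) hi))))
           (+ (lparallel:force left) (lparallel:force right)))))

   (decompose 1 1000000)   ; => 500000500000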

So the question becomes, how does one manage this?

A similar problem occurs in robotics where one could have multiple
wheels, arms, propellers, etc. that need to act independently but
in coordination.

The robot solution uses ROS2. Its three key ideas are the ROSCORE,
TOPICS with publish/subscribe, and SERVICES with request/response.
The latter two are communication paths defined between processes.

ROS2 has a "roscore" which is basically a phonebook of "topics".
Any process can create or look up the current active topics. eq:

   rostopic list

TOPICS:

Any process can PUBLISH a topic (which is basically a typed data
structure), e.g. the topic /hw with the String data "Hello, World":

   rostopic pub /hw std_msgs/String "Hello, World"

Any process can SUBSCRIBE to a topic, such as /hw, and get a
copy of the data, e.g.:

   rostopic echo /hw   ==> "Hello, World"

Publishers talk, subscribers listen.


SERVICES:

Any process can make a REQUEST of a SERVICE and get a RESPONSE.
This is basically a remote function call.
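
For example, a node offering a (hypothetical) /gcd service could
be called with:

   rosservice call /gcd 12 18   ==> 6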



Axiom in parallel?

So each domain could run in its own process and provide services,
one for each function. Any other process could request a
computation and get the result as a response. Domains could
request services from other domains, either waiting for the
response or continuing while it is being computed.

The output could be sent anywhere, to a terminal, to a browser,
to a network, or to another process using the publish/subscribe
protocol, potentially all at the same time since there can be many
subscribers to a topic.

Available domains could be dynamically added by announcing
themselves as new "topics" and could be dynamically looked-up
at runtime.
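
As a minimal sketch of how this might look (all names here are
hypothetical; in-image hash tables stand in for roscore and for
the real inter-process transport):

   (defvar *services* (make-hash-table :test #'equal)
     "Phonebook mapping a service name to the function answering it.")

   (defvar *subscribers* (make-hash-table :test #'equal)
     "Topic name -> list of subscriber callbacks.")

   (defun register-service (name fn)
     "A domain announces a service, one per exported function."
     (setf (gethash name *services*) fn))

   (defun request (name &rest args)
     "Request/response: a remote function call in the real system."
     (apply (gethash name *services*) args))

   (defun subscribe (topic callback)
     (push callback (gethash topic *subscribers*)))

   (defun publish (topic data)
     "Every subscriber to TOPIC gets a copy of DATA."
     (dolist (cb (gethash topic *subscribers*))
       (funcall cb data)))

   ;; The NNI domain registers gcd as a service,
   (register-service "/NNI/gcd" (lambda (a b) (gcd a b)))

   ;; a terminal subscribes to the results topic,
   (subscribe "/results" (lambda (d) (format t "terminal: ~a~%" d)))

   ;; and any process can request a computation and publish the answer.
   (publish "/results" (request "/NNI/gcd" 12 18))   ; prints terminal: 6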

This structure allows function-level / domain-level parallelism.
It is very effective in the robot world and I think it might be a
good structuring mechanism to allow computational mathematics
to take advantage of multiple processors in a disciplined fashion.

Axiom has a thousand domains and each could run on its own core.

In addition, notice that each domain is independent of the others.
So if we want to use BLAS Fortran code, it could just be another
service node. In fact, any "foreign function" could transparently
cooperate in a distributed Axiom.

Another key feature is that proofs can be carried out "by node".

Tim




On 6/5/21, Tim Daly <axiomcas@gmail.com> wrote:
> Axiom is based on first-class dependent types. Deciding when
> two types are equivalent may involve computation. See
> Christiansen, David Thrane "Checking Dependent Types with
> Normalization by Evaluation" (2019)
>
> This puts an interesting constraint on building types. A
> constructed type has to export a function that decides whether a
> given type is "equivalent" to it.
>
> The notion of "equivalence" might involve category ideas
> of natural transformation and univalence. Sigh.
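>
> As a minimal sketch of that constraint (NORMALIZE here is a
> hypothetical stand-in for normalization by evaluation, and is just
> the identity):
>
>    (defstruct ty
>      name
>      args
>      ;; each constructed type exports its own decision procedure
>      (equiv (lambda (a b) (equalp a b))))
>
>    (defun normalize (ty)
>      "Stand-in for normalization by evaluation."
>      ty)
>
>    (defun ty-equivalent-p (a b)
>      "Deciding equivalence may itself require computation."
>      (funcall (ty-equiv a) (normalize a) (normalize b)))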
>
> That's an interesting design point.
>
> Tim
>
>
> On 5/5/21, Tim Daly <axiomcas@gmail.com> wrote:
>> It is interesting that programmers' eyes and expectations adapt
>> to the tools they use. For instance, I use emacs and expect to
>> work directly in files and multiple buffers. When I try to use one
>> of the many IDE tools I find they tend to "get in the way". I already
>> know or can quickly find whatever they try to tell me. If you use an
>> IDE you probably find emacs "too sparse" for programming.
>>
>> Recently I've been working in a sparse programming environment.
>> I'm exploring the question of running a proof checker in an FPGA.
>> The FPGA development tools are painful at best and not intuitive
>> since you SEEM to be programming but you're actually describing
>> hardware gates, connections, and timing. This is an environment
>> where everything happens all-at-once and all-the-time (like the
>> circuits in your computer). It is the "assembly language of circuits".
>> Naturally, my eyes have adapted to this rather raw level.
>>
>> That said, I'm normally doing literate programming all the time.
>> My typical file is a document which is a mixture of latex and lisp.
>> It is something of a shock to return to that world. It is clear why
>> people who program in Python find lisp to be a "sea of parens".
>> Yet as a lisp programmer, I don't even see the parens, just code.
>>
>> It takes a few minutes in a literate document to adapt vision to
>> see the latex / lisp combination as natural. The latex markup,
>> like the lisp parens, eventually just disappears. What remains
>> is just lisp and natural language text.
>>
>> This seems painful at first but eyes quickly adapt. The upside
>> is that there is always a "finished" document that describes the
>> state of the code. The overhead of writing a paragraph to
>> describe a new function or change a paragraph to describe the
>> changed function is very small.
>>
>> Using a Makefile I latex the document to generate a current PDF
>> and then I extract, load, and execute the code. This loop catches
>> errors in both the latex and the source code. Keeping the PDF open
>> in my viewer shows all of the changes in the document after every
>> run of make. That way I can edit the book as easily as the code.
>>
>> Ultimately I find that writing the book while writing the code is
>> more productive. I don't have to remember why I wrote something
>> since the explanation is already there.
>>
>> We all have our own way of programming and our own tools.
>> But I find literate programming to be a real advance over IDE
>> style programming and "raw code" programming.
>>
>> Tim
>>
>>
>> On 2/27/21, Tim Daly <axiomcas@gmail.com> wrote:
>>> The systems I use have the interesting property of
>>> "Living within the compiler".
>>>
>>> Lisp, Forth, Emacs, and other systems that present themselves
>>> through the Read-Eval-Print-Loop (REPL) allow the
>>> ability to deeply interact with the system, shaping it to your need.
>>>
>>> My current thread of study is software architecture. See
>>> https://www.youtube.com/watch?v=W2hagw1VhhI&feature=youtu.be
>>> and https://www.georgefairbanks.com/videos/
>>>
>>> My current thinking on SANE involves the ability to
>>> dynamically define categories, representations, and functions
>>> along with "composition functions" that permit choosing a
>>> combination at the time of use.
>>>
>>> You might want a domain for handling polynomials. There are
>>> a lot of choices, depending on your use case. You might want
>>> different representations. For example, you might want dense,
>>> sparse, recursive, or "machine compatible fixnums" (e.g. to
>>> interface with C code). If these don't exist it ought to be possible
>>> to create them. Such "lego-like" building blocks require careful
>>> thought about creating "fully factored" objects.
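>>>
>>> As a minimal sketch of the composition idea (the class names are
>>> hypothetical stand-ins for real representation blocks):
>>>
>>>    (defclass dense-poly () ())
>>>    (defclass sparse-poly () ())
>>>    (defclass recursive-poly () ())
>>>
>>>    (defun make-polynomial-domain (&key (rep :sparse))
>>>      "Compose a polynomial domain from lego-like parts at use time."
>>>      (make-instance (ecase rep
>>>                       (:dense     'dense-poly)
>>>                       (:sparse    'sparse-poly)
>>>                       (:recursive 'recursive-poly))))
>>>
>>>    ;; chosen at the time of use:
>>>    ;; (make-polynomial-domain :rep :dense)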
>>>
>>> Given that goal, the traditional barrier of "compiler" vs "interpreter"
>>> does not seem useful. It is better to "live within the compiler" which
>>> gives the ability to define new things "on the fly".
>>>
>>> Of course, the SANE compiler is going to want an associated
>>> proof of the functions you create along with the other parts
>>> such as its category hierarchy and representation properties.
>>>
>>> There is no such thing as a simple job. :-)
>>>
>>> Tim
>>>
>>>
>>> On 2/18/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>> The Axiom SANE compiler / interpreter has a few design points.
>>>>
>>>> 1) It needs to mix interpreted and compiled code in the same function.
>>>> SANE allows dynamic construction of code as well as dynamic type
>>>> construction at runtime. Both of these can occur in a runtime object.
>>>> So there is potentially a mixture of interpreted and compiled code.
>>>>
>>>> 2) It needs to perform type resolution at compile time without overhead
>>>> where possible. Since this is not always possible there needs to be
>>>> a "prefix thunk" that will perform the resolution. Trivially, for
>>>> example,
>>>> if we have a + function we need to type-resolve the arguments.
>>>>
>>>> However, if we can prove at compile time that the types are both
>>>> bounded-NNI and the result is bounded-NNI (i.e. fixnum in lisp)
>>>> then we can inline a call to + at runtime. If not, we might have
>>>> + applied to NNI and POLY(FLOAT), which requires a thunk to
>>>> resolve types. The thunk could even "specialize and compile"
>>>> the code before executing it.
>>>>
>>>> It turns out that the "threaded interpretive language" model used
>>>> by Forth provides an efficient and effective way to do this.[0]
>>>> Type resolution can be "inserted" in intermediate thunks.
>>>> The model also supports dynamic overloading and tail recursion.
>>>>
>>>> Combining high-level CLOS code with low-level threading gives an
>>>> easy to understand and robust design.
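>>>>
>>>> A minimal CLOS sketch of the fast path vs. the prefix thunk
>>>> (PLUS and RESOLVE-AND-ADD are hypothetical names, and this
>>>> assumes an implementation such as SBCL where FIXNUM is a
>>>> built-in class usable as a specializer):
>>>>
>>>>    (defgeneric plus (a b))
>>>>
>>>>    ;; Fast path: both arguments are fixnums and (by the compile-time
>>>>    ;; proof) the result stays a fixnum, so + can be inlined.
>>>>    (defmethod plus ((a fixnum) (b fixnum))
>>>>      (the fixnum (+ a b)))
>>>>
>>>>    ;; Prefix thunk: the default method resolves types at runtime
>>>>    ;; and could even specialize-and-compile before executing.
>>>>    (defmethod plus (a b)
>>>>      (resolve-and-add a b))
>>>>
>>>>    (defun resolve-and-add (a b)
>>>>      "Stand-in for runtime type resolution across domains."
>>>>      (+ a b))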
>>>>
>>>> Tim
>>>>
>>>> [0] Loeliger, R.G. "Threaded Interpretive Languages" (1981)
>>>> ISBN 0-07-038360-X
>>>>
>>>>
>>>>
>>>>
>>>> On 2/5/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>> I've worked hard to make Axiom depend on almost no other
>>>>> tools so that it would not get caught by "code rot" of libraries.
>>>>>
>>>>> However, I'm also trying to make the new SANE version much
>>>>> easier to understand and debug. To that end I've been experimenting
>>>>> with some ideas.
>>>>>
>>>>> It should be possible to view source code, of course. But the
>>>>> source code is not the only, nor possibly the best, representation
>>>>> of the ideas. In particular, source code gets compiled into data
>>>>> structures. In Axiom these data structures really are a graph of
>>>>> related structures.
>>>>>
>>>>> For example, looking at the gcd function from NNI, there is the
>>>>> representation of the gcd function itself. But there is also a
>>>>> structure that is the REP (and, in the new system, is separate
>>>>> from the domain).
>>>>>
>>>>> Further, there are associated specification and proof structures.
>>>>> Even further, the domain inherits the category structures, and
>>>>> from those it inherits logical axioms and definitions through the
>>>>> proof structure.
>>>>>
>>>>> Clearly the gcd function is a node in a much larger graph structure.
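>>>>>
>>>>> A tiny CLOS sketch of that graph view (the slot names are
>>>>> hypothetical):
>>>>>
>>>>>    (defclass fn-node ()
>>>>>      ((name    :initarg :name)     ; e.g. gcd
>>>>>       (rep     :initarg :rep)      ; the REP, separate from the domain
>>>>>       (spec    :initarg :spec)     ; associated specification
>>>>>       (proof   :initarg :proof)    ; associated proof structure
>>>>>       (parents :initarg :parents)) ; inherited category structures
>>>>>      (:documentation "A function as one node in the workspace graph."))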
>>>>>
>>>>> When trying to decide why code won't compile it would be useful to
>>>>> be able to see and walk these structures. I've thought about using the
>>>>> browser but browsers are too weak. Either everything has to be
>>>>> "in a single tab to show the graph" or "the nodes of the graph
>>>>> are in different tabs". Plus, constructing dynamic graphs that
>>>>> change as the software
>>>>> changes (e.g. by loading a new spad file or creating a new function)
>>>>> represents the huge problem of keeping the browser "in sync with the
>>>>> Axiom workspace". So something more dynamic and embedded is needed.
>>>>>
>>>>> Axiom source gets compiled into CLOS data structures. Each of these
>>>>> new SANE structures has an associated surface representation, so they
>>>>> can be presented in user-friendly form.
>>>>>
>>>>> Also, since Axiom is literate software, it should be possible to
>>>>> look at the code in its literate form with the surrounding
>>>>> explanation.
>>>>>
>>>>> Essentially we'd like to have the ability to "deep dive" into
>>>>> the Axiom workspace, not only for debugging, but also for
>>>>> understanding what
>>>>> functions are used, where they come from, what they inherit, and
>>>>> how they are used in a computation.
>>>>>
>>>>> To that end I'm looking at using McClim, a lisp windowing system.
>>>>> Since the McClim windows would be part of the lisp image, they
>>>>> can display (and modify) the Axiom workspace at all times.
>>>>>
>>>>> The only hesitation is that McClim uses quicklisp and drags in a lot
>>>>> of other subsystems. It's all lisp, of course.
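>>>>>
>>>>> As a minimal sketch (the frame and its display function are
>>>>> hypothetical), a McClim pane that displays part of the workspace
>>>>> might look like:
>>>>>
>>>>>    (ql:quickload :mcclim)
>>>>>
>>>>>    (clim:define-application-frame workspace-browser ()
>>>>>      ()
>>>>>      (:panes (graph :application :display-function 'show-workspace))
>>>>>      (:layouts (default graph)))
>>>>>
>>>>>    (defun show-workspace (frame pane)
>>>>>      "Could walk and draw the real workspace graph."
>>>>>      (declare (ignore frame))
>>>>>      (format pane "gcd : (NNI, NNI) -> NNI~%"))
>>>>>
>>>>>    ;; the window lives inside the lisp image:
>>>>>    ;; (clim:run-frame-top-level
>>>>>    ;;  (clim:make-application-frame 'workspace-browser))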
>>>>>
>>>>> These ideas aren't new. They were available on Symbolics machines,
>>>>> a truly productive platform and one I sorely miss.
>>>>>
>>>>> Tim
>>>>>
>>>>>
>>>>>
>>>>> On 1/19/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>> Also of interest is the talk
>>>>>> "The Unreasonable Effectiveness of Dynamic Typing for Practical
>>>>>> Programs"
>>>>>> https://vimeo.com/74354480
>>>>>> which questions whether static typing really has any benefit.
>>>>>>
>>>>>> Tim
>>>>>>
>>>>>>
>>>>>> On 1/19/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>> Peter Naur wrote an article of interest:
>>>>>>> http://pages.cs.wisc.edu/~remzi/Naur.pdf
>>>>>>>
>>>>>>> In particular, it mirrors my notion that Axiom needs
>>>>>>> to embrace literate programming so that the "theory
>>>>>>> of the problem" is presented as well as the "theory
>>>>>>> of the solution". I quote the introduction:
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> This article is, to my mind, the most accurate account
>>>>>>> of what goes on in designing and coding a program.
>>>>>>> I refer to it regularly when discussing how much
>>>>>>> documentation to create, how to pass along tacit
>>>>>>> knowledge, and the value of the XP's metaphor-setting
>>>>>>> exercise. It also provides a way to examine a methodology's
>>>>>>> economic structure.
>>>>>>>
>>>>>>> In the article, which follows, note that the quality of the
>>>>>>> designing programmer's work is related to the quality of
>>>>>>> the match between his theory of the problem and his theory
>>>>>>> of the solution. Note that the quality of a later programmer's
>>>>>>> work is related to the match between his theories and the
>>>>>>> previous programmer's theories.
>>>>>>>
>>>>>>> Using Naur's ideas, the designer's job is not to pass along
>>>>>>> "the design" but to pass along "the theories" driving the design.
>>>>>>> The latter goal is more useful and more appropriate. It also
>>>>>>> highlights that knowledge of the theory is tacit in the owning, and
>>>>>>> so passing along the theory requires passing along both explicit
>>>>>>> and tacit knowledge.
>>>>>>>
>>>>>>> Tim
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>


