axiom-developer

Re: Axiom musings...


From: Tim Daly
Subject: Re: Axiom musings...
Date: Thu, 30 Jul 2020 07:28:59 -0400

This is the video related to the Deep Learning for Symbolic
Mathematics paper.

https://www.youtube.com/watch?v=O_sHHG5_lr8&list=PLoROMvodv4rMWw6rRoeSpkiseTHzWj6vu&index=5

I've spent a fair amount of time running some of the
generated problems through Axiom for fuzz testing.

Sometimes we get exact results, or exact up to a constant.
More interesting is that it uncovers some bugs.
I expect to push out some of the results in August.
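
One way to mechanize the check (a sketch in SymPy, only to illustrate
the idea; the actual runs are done in Axiom and the candidate answer
below is made up): differentiate the candidate antiderivative and
subtract the integrand. If the residual simplifies to a constant, the
result is "exact up to a constant".

    # illustrative sketch, not Axiom code
    from sympy import symbols, diff, simplify, sin, cos

    x = symbols('x')
    integrand = x * cos(x)
    candidate = x * sin(x) + cos(x) + 7        # hypothetical answer to check
    residual = simplify(diff(candidate, x) - integrand)
    print(residual, residual.is_constant())    # 0 True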

Tim


On 7/26/20, William Sit <wsit@ccny.cuny.edu> wrote:
> The question of equality in computation is very different from equality in
> mathematics, and the difference is basically due to data representations in
> computation (and for this question we are not even concerned with equality
> between two implementations of the same domain with different
> representations (types), where coercion is needed). As you pointed out, the
> key question is: "what should be the definition of equality?" even within a
> single domain and a single data representation.
> Arithmetic in any floating point number system does not satisfy many of the
> usual laws. For example, a*(b/a) is NOT b in such a system unless the system
> is FP(2,1,c), where FP(r,p,c) denotes the floating point system with radix r,
> precision p, and c for chopping (truncation). The associative laws of
> addition and multiplication do NOT hold in FP(r,p,c). Strict inequality
> (less than) between two FP(r,p,c) numbers can be weakened to (less than or
> equal to) after, say, adding a third number or multiplying by a positive
> number. Ref: Pat H. Sterbenz, Floating Point Computation, Sections 1.6, 1.7.
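>
> For instance, a quick illustration with ordinary binary64 floats (plain
> Python here, not Axiom, and not FP(r,p,c) proper, but the same phenomena):
>
>     a, b, c = 0.1, 0.2, 0.3
>     print((a + b) + c == a + (b + c))   # False: addition is not associative
>     print(a + b == c)                   # False: rounding error
>     # search 1..99 for a counterexample to n*(1/n) == 1, i.e. a*(b/a) is not b
>     print(next((n for n in range(1, 100) if n * (1.0 / n) != 1.0), None))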
>
> So proving the correctness of an implementation of an algorithm, say one
> for taking square roots, will be far more difficult than the same task for
> integers. How can one be convinced that the computation gives the most
> accurate square root for all inputs in the system FP(r,p,c)? In fact, is
> there such an algorithm other than one based on case considerations for
> every number in FP(r,p,c)---you may as well use a lookup table in that case.
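>
> (As a toy illustration of what "case considerations for every number"
> looks like, a sketch in Python, not Axiom: build a tiny FP(2,3) system
> with a small exponent range, compute the correctly rounded square root
> exactly with rationals, and compare a candidate implementation against
> it on every representable input.)
>
>     from fractions import Fraction
>     from math import sqrt
>
>     PREC, EMIN, EMAX = 3, -3, 4
>     VALS = sorted({Fraction(m, 2**(PREC - 1)) * Fraction(2)**e
>                    for e in range(EMIN, EMAX + 1)
>                    for m in range(2**(PREC - 1), 2**PREC)})
>
>     def correctly_rounded_sqrt(x):
>         # exact: advance while the midpoint to the next value lies below sqrt(x)
>         best = VALS[0]
>         for v in VALS[1:]:
>             if ((best + v) / 2)**2 < x:
>                 best = v
>         return best
>
>     def candidate_sqrt(x):
>         # the "implementation" under test: double sqrt, rounded into the toy system
>         return min(VALS, key=lambda v: abs(float(v) - sqrt(float(x))))
>
>     print(all(candidate_sqrt(x) == correctly_rounded_sqrt(x) for x in VALS))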
>
> In the case of polynomial equality between two domains, one can always do a
> subtraction in the domain of the target of coercion to verify equality. For
> the specific example you gave, I would agree with Cubical type theory
> (whatever that is!) that they should be considered equal (unless there is a
> problem with computing the LCM of denominators of the coefficients).
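>
> (Concretely, with SymPy standing in for the two Axiom domains; this is
> only a sketch of the subtract-and-simplify test, not Axiom code:)
>
>     from sympy import symbols, Rational, simplify
>
>     x = symbols('x')
>     p = Rational(1, 2)*x**2 + Rational(1, 3)*x   # the POLY(FRAC(INT)) view
>     q = (3*x**2 + 2*x) / 6                       # the FRAC(POLY(INT)) view
>     print(simplify(p - q) == 0)                  # True: they are equal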
>
> William
>
> William Sit
> Professor Emeritus
> Department of Mathematics
> The City College of The City University of New York
> New York, NY 10031
> homepage: wsit.ccny.cuny.edu
>
> ________________________________________
> From: Tim Daly <axiomcas@gmail.com>
> Sent: Saturday, July 25, 2020 5:18 AM
> To: William Sit
> Cc: axiom-dev
> Subject: Re: [EXTERNAL] Re: Axiom musings...
>
> The further I get into this game, the harder it becomes.
>
> For example, equality. In the easiest case Axiom has
> propositional equality. That is, the equality function is
> defined by the type, e.g. for the type Integer, 2 = 2.
>
> However, there is also judgmental equality, which
> would apply at the type level. If floating point numbers
> match the usual IEEE definition and UNUMs match
> Gustafson's Type I definition, is there an equality
> between the types? Does FLOAT = UNUM?
> https://en.wikipedia.org/wiki/Unum_(number_format)
>
> Axiom's coerce provides a (poorly) defined kind of
> judgmental equality, but it's not actually defined as an
> equality at the type level. You can coerce from
> POLY(FRAC(INT)) to FRAC(POLY(INT)) but are they
> equal? Cubical type theory would seem to say yes.
>
> And don't get me started on the univalence axiom.
>
> There are lots of interesting equality questions. If there
> is a proof of a program and you can regenerate the
> program from the proof, are the proof and program equal?
>
> Presumably the chap behind the SANE effort will be
> able to handle both kinds of equality (as well as giving
> universes for types).
>
> But I suspect he's not that clever.
>
>
>
> On 7/21/20, Tim Daly <axiomcas@gmail.com> wrote:
>>> Yes, I do remember we worked on the "symbolic integer"
>>> (or "symbolic polynomial", etc.) domains.
>>
>> Sadly only the idea survives.
>>
>> I spent almost all of my time at that time trying to get a
>> version of Axiom to bootstrap. Starting Axiom, at the time,
>> required an already running version of Axiom (the NAG
>> version) and NAG would not let me distribute their code.
>>
>> Building a self-booting version involved (among a lot of
>> other things) breaking circular definitions in the category
>> and domain hierarchy. That took several months and all
>> of my time (much to Baumslag's annoyance).
>>
>> That effort also involved re-writing code from Norman's
>> Lisp back to Common Lisp (much to Norman's annoyance).
>>
>> As a result, the free version of Axiom wasn't released until
>> about 2003 even though I got a copy in 2001. As part of my
>> employment agreement with CCNY they wouldn't make any
>> claim on Axiom and I would only work on it in my free time
>> unless Gilbert said otherwise. My work time was spent as
>> the Magnus lead developer and open sourcing it. Gilbert
>> eventually wanted Axiom and asked me to do it for work
>> also. That's what led to the grant proposal.
>>
>> My thoughts on the subject of symbolic integers were, as a
>> consequence, only "paper experiments". I have several
>> thousand technical papers stacked in my office and library.
>> The CCNY notes are in there somewhere. Unfortunately,
>> every time I pick up a pile to search I find other things I
>> "need to know now" and get lost in that subject. It is like
>> trying to get a glass of water when the Three Gorges Dam
>> bursts.
>>
>> It seems feasible to create a SYMINT Symbolic Integer
>> domain that used symbols rather than numbers for the
>> arithmetic, creating a ring that could be used by POLY.
>> Using SYMINT in SYMFRAC Symbolic Fraction would
>> provide a field that could be used by POLY.
>>
>> The idea is still worth the effort but, as usual, I am deep
>> into rebuilding Axiom from the ground up. The issues I'm
>> struggling with now make the bootstrap effort look like a
>> weekend hack. Bootstrapping took several months.
>> The current effort has taken 6 years so far. There is SO
>> much to know and I am a slow learner.
>>
>> This is where I wish I had graduate students :-)
>>
>> Tim
>>
>>
>> On 7/21/20, William Sit <wsit@ccny.cuny.edu> wrote:
>>> Dear Tim:
>>>
>>> You have expanded on the issues I raised. While you are right that a
>>> computational algorithm can be proved if the specifications are
>>> completely and precisely coded for proof systems like Coq, the catch
>>> is not the 'precisely' part but the 'completely' part.
>>>
>>> Yes, I do remember we worked on the "symbolic integer" (or "symbolic
>>> polynomial", etc.) domains. I think I might actually have some notes and
>>> ideas and perhaps code, but it would take more searching than I can do
>>> now on obsolete and perhaps non-functional computers. A few days ago, I
>>> was trying to recover tex files from a Raspberry Pi that I used while in
>>> a hotel in China (2015), but nothing (meaning ftp and other transfer
>>> methods or even mailing programs) works because of security. I have also
>>> forgotten all my knowledge of Linux.
>>>
>>> Too bad we did not continue that effort. Does such a domain (in any
>>> computer
>>> algebra system) exist these days? After all, nearly three decades have
>>> passed.
>>>
>>> William
>>>
>>> William Sit
>>> Professor Emeritus
>>> Department of Mathematics
>>> The City College of The City University of New York
>>> New York, NY 10031
>>> homepage: wsit.ccny.cuny.edu
>>>
>>> ________________________________________
>>> From: Tim Daly <axiomcas@gmail.com>
>>> Sent: Monday, July 20, 2020 4:44 PM
>>> To: William Sit
>>> Cc: axiom-dev
>>> Subject: Re: [EXTERNAL] Re: Axiom musings...
>>>
>>>> So there are two kinds of algorithms, one that is purely mathematical
>>>> and one that is computational, the latter including a particular (class
>>>> of)
>>>> data representation(s) (perhaps even the computer language and
>>>> system of the implementation). It is proofs for the latter type of
>>>> algorithms that is lacking.
>>>
>>> There are, as you point out, several kinds of proofs to consider.
>>>
>>> One is the proof of the algorithm. An example is Buchberger's
>>> Groebner Basis algorithm which was proven in Coq:
>>> https://www.ricam.oeaw.ac.at/specsem/srs/groeb/download/coq-linz.pdf
>>>
>>> The Coq proof establishes that the formal algorithm is correct.
>>>
>>> Even in proof one runs into limits of what can be proved. For example,
>>> if the convergence is non-uniform one can, at best, do a proof that
>>> assumes bounds on the non-uniform behavior. So this isn't strictly
>>> a computer algorithm issue.
>>>
>>>> Since data representations (like REP in Axiom) are built recursively,
>>>> a computational algorithm (in the sense above) for Groebner basis
>>>> may have to be designed to take care of just a few of the ways
>>>> integers can be represented. Axiom is built with that in mind (that's
>>>> where type theory comes in), but I bet no one SPECIFIES their
>>>> computational algorithms with the limitations of data representation
>>>> in mind, much less proves the algorithm anew for each new
>>>> representation.
>>>
>>> If you remember, while we were both at CCNY, I worked on a
>>> grant project to construct a "symbolic integer" domain so that
>>> computations could occur over non-numeric integers. The
>>> symbolic form did not have a numeric limitation. Unfortunately
>>> the current Axiom has no way to support such a domain.
>>>
>>> I'm glad you brought this up. I will have to give some thought
>>> to representing and computing with symbolic integers again.
>>>
>>>> So if a computation of a Groebner basis halts
>>>> because of an intermediate LCM computation (say of two integer
>>>> coefficients), should we consider the implementation as proven
>>>> correct? What if the overflow condition was not detected and the
>>>> computation continues? Indeed, since there may be different
>>>> implementations of the integer domain, we must be sure that
>>>> every implementation of the LCM algorithm handles overflows
>>>> correctly AND that this is specified in the documentation.
>>>
>>> Establishing that the algorithm is correct (e.g. Groebner) is
>>> clearly on the proof side of computational math.
>>> Establishing that there is an implementation of Groebner is
>>> clearly on the computer algebra side of computational math.
>>>
>>> The game is to unite the two.
>>>
>>> Such a proof becomes a PART of the specification of a program
>>> that implements the algorithm, such as in Axiom.
>>>
>>> The definitions and axioms of the proof have to be put into
>>> the system both at the category and domain levels. They
>>> need to be available to the implementation code.
>>>
>>> On the other hand, there are non-logical 'axioms' of
>>> categories and domains, such as limits to the size of a
>>> floating point number. One could have many FLOAT
>>> domains, such as Gustafson's UNUMs.
>>> http://www.johngustafson.net/pdfs/BeatingFloatingPoint.pdf
>>>
>>> There are non-logical limits to the implementation, such as
>>> intermediate expression swell that uses up all of memory.
>>> It isn't possible to write a specification that can detect all of
>>> these kinds of failures before they occur. But there are logical
>>> ways to handle such boundaries.
>>>
>>> In logic there are 'product types' which are basically
>>> multi-field records. There are 'sum types' which are disjoint
>>> unions.
>>>
>>> Axiom's approach to reaching limits of computation is to
>>> return a 'sum type' that is either the result or 'failed'. The
>>> 'failed' result needs to be explicitly carried in the proof.
>>> Because 'failed' is a valid result, the specification can be
>>> expanded to include this as a valid result.
>>>
>>> Just because a program can fail there is no reason to say
>>> that it can't be proven, given that 'failed' is a valid result.
>>>
>>> One could even expand the sum types to include information
>>> about the failure so that 'failed' would be a product type that
>>> said why it failed. That allows another domain to be used.
>>> In fact, one could introduce a FAILED domain in Axiom
>>> with a 'why?' function. Local domains could extend FAILED
>>> with their own 'why?' function specific to their possible
>>> failures.
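>>>
>>> A minimal sketch of that shape (Python dataclasses here, not Spad; the
>>> 32-bit integer domain and the 'why' field are illustrative only):
>>>
>>>     from dataclasses import dataclass
>>>     from math import gcd
>>>     from typing import Union
>>>
>>>     @dataclass
>>>     class Failed:
>>>         why: str                      # the 'why?' of a FAILED domain
>>>
>>>     INT32_MAX = 2**31 - 1
>>>
>>>     def lcm32(a: int, b: int) -> Union[int, Failed]:
>>>         result = abs(a // gcd(a, b) * b)
>>>         if result > INT32_MAX:
>>>             return Failed("lcm overflows 32-bit integers")
>>>         return result                 # the sum type: a value or 'failed'
>>>
>>>     print(lcm32(12, 18))              # 36
>>>     print(lcm32(2**20, 2**20 + 1))    # Failed(why='lcm overflows ...')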
>>>
>>> Axiom has the ability to have multiple domains that can
>>> overcome limits, e.g. small floats, large floats, unums.
>>> These would allow users to 'retry' a computation with
>>> different assumptions.
>>>
>>> The issue you raise is important. Some algorithms in
>>> Axiom have "hand-waving' specifications that need
>>> to be expanded to properly return 'failed' when that is
>>> expected. I think this is a 'good thing' and a useful
>>> by-product of combining proof and computer algebra.
>>>
>>> Am I closer to understanding your objections?
>>>
>>> Tim
>>>
>>>
>>>
>>> On 7/20/20, William Sit <wsit@ccny.cuny.edu> wrote:
>>>> Hi Tim:
>>>>
>>>> Perhaps I did not make myself clear in the short comment.
>>>> What I wanted to say is that a data representation is not the same as
>>>> the
>>>> abstract mathematical objects because there are finite bounds on the
>>>> representation. Take for example, an algorithm to compute the LCM of two
>>>> integers. The LCM can cause overflow and not be representable. Of
>>>> course,
>>>> you can change the data representation to have "infinite precision", but
>>>> that would still be bounded by actual physical memory of the machine.
>>>> The
>>>> careful programmer of the LCM algorithm would add throws and catches to
>>>> handle the "error",but the implementation will have to add code that is
>>>> not
>>>> considered in the theoretical LCM algorithm (unless the LCM algorithm is
>>>> meant for bounded integers of a fixed data representation and not
>>>> abstract
>>>> integers). So there are two kinds of algorithms, one that is purely
>>>> mathematical and one that is computational, the latter including a
>>>> particular (class of) data representation(s) (perhaps even the computer
>>>> language and system of the implementation). It is proofs for the latter
>>>> type
>>>> of algorithms that is lacking. Since data representations (like REP in
>>>> Axiom) are built recursively, a computational algorithm (in the sense
>>>> above)
>>>> for Groebner basis may have to be designed to take care of just a few of
>>>> the
>>>> ways integers can be represented. Axiom is built with that in mind
>>>> (that's
>>>> where type theory comes in), but I bet no one SPECIFIES their
>>>> computational
>>>> algorithms with the limitations of data representation in mind, much
>>>> less
>>>> proves the algorithm anew for each new representation. So if a
>>>> computation
>>>> of a Groebner basis halts because of an intermediate LCM computation
>>>> (say
>>>> of
>>>> two integer coefficients), should we consider the implementation as
>>>> proven
>>>> correct? What if the overflow condition was not detected and the
>>>> computation
>>>> continues? Indeed, since there may be different implementations of the
>>>> integer domain, we must be sure that every implementation of the LCM
>>>> algorithm handles overflows correctly AND that this is specified in the
>>>> documentation.
>>>>
>>>> I am sure I am just being ignorant to pose these questions, because they
>>>> must have been considered and perhaps solved. In that case, please
>>>> ignore
>>>> them and just tell me so.
>>>>
>>>> William
>>>>
>>>> William Sit
>>>> Professor Emeritus
>>>> Department of Mathematics
>>>> The City College of The City University of New York
>>>> New York, NY 10031
>>>> homepage: wsit.ccny.cuny.edu
>>>>
>>>> ________________________________________
>>>> From: Tim Daly <axiomcas@gmail.com>
>>>> Sent: Sunday, July 19, 2020 5:33 PM
>>>> To: William Sit
>>>> Cc: axiom-dev
>>>> Subject: Re: [EXTERNAL] Re: Axiom musings...
>>>>
>>>> There are several "problems" with proving programs correct that
>>>> I don't quite know how to solve, or even approach. But that's the
>>>> fun of "research", right?
>>>>
>>>> For the data representation question I've been looking at types.
>>>> I took 10 courses at CMU. I am eyebrow deep in type theory.
>>>> I'm looking at category theory and homotopy type theory. So
>>>> far I haven't seen anyone looking at the data problem. Most of
>>>> the focus is on strict code typing.
>>>>
>>>> There is an old MIT course by Abelson and Sussman "Structure
>>>> and Interpretation of Computer Programs" (SICP). They rewrite
>>>> data as programs which, in Lisp, is trivial to do. Dan Friedman
>>>> seems to have some interesting ideas too.
>>>>
>>>> All of Axiom's SANE types are now CLOS objects which gives
>>>> two benefits. First, they can be inherited. But second, they
>>>> are basically Lisp data structures with associated code.
>>>>
>>>> I'm thinking of associating "data axioms" with the representation
>>>> (REP) object of a domain as well as with the functions.
>>>>
>>>> For example, DenavitHartenbergMatrix encodes 4x4 matrices
>>>> used in graphics and robotics. They are 4x4 matrices where
>>>> the upper left 3x3 encodes rotations, the right column encodes
>>>> translations, and the lower row includes scaling, skewing, etc.
>>>>
>>>> (As an aside, DHMATRIX matrices have an associated
>>>> Jacobian which encodes the dynamics in things like robots.
>>>> Since I'm also programming a robot I'm tempted to work on
>>>> extending the domain with related functions... but, as
>>>> Hamming said, new algebra code isn't "the most important
>>>> problem in computational mathematics").
>>>>
>>>> Axioms associated with the REP can assume that they are
>>>> 4x4, that they can be inverted, that they have a "space" of
>>>> rotations, etc. The axioms provide "facts" known to be true
>>>> about the REP. (I also need to think about a "specification"
>>>> for the REP but I'm not there yet).
>>>>
>>>> Since every category and domain is a CLOS data structure
>>>> the DHMATRIX data structure inherits REP axioms from its
>>>> inheritance graph (e.g. SQMATRIX axioms). But DHMATRIX
>>>> adds domain-specific REP axioms (as well as domain-specific
>>>> function axioms). Thus a DHMATRIX rotate function can
>>>> base its proof on the fact that it only affects the upper 3x3
>>>> and lives in a space of rotations, all of which can be assumed
>>>> by the proof.
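>>>>
>>>> A rough sketch of such "data axioms" as runtime checks (Python with
>>>> numpy, purely illustrative; the eventual form would be axioms attached
>>>> to the REP, not assertions):
>>>>
>>>>     import numpy as np
>>>>
>>>>     def dh_rotate_z(theta, tx, ty, tz):
>>>>         # rotation about z in the upper 3x3, translation in the right column
>>>>         c, s = np.cos(theta), np.sin(theta)
>>>>         return np.array([[c, -s, 0, tx],
>>>>                          [s,  c, 0, ty],
>>>>                          [0,  0, 1, tz],
>>>>                          [0,  0, 0, 1.0]])
>>>>
>>>>     M = dh_rotate_z(0.3, 1.0, 2.0, 3.0)
>>>>     R = M[:3, :3]
>>>>     assert M.shape == (4, 4)                   # "it is 4x4"
>>>>     assert np.allclose(R @ R.T, np.eye(3))     # upper 3x3 is a rotation
>>>>     assert np.isclose(np.linalg.det(R), 1.0)
>>>>     assert np.allclose(M[3], [0, 0, 0, 1])     # bottom row is [0,0,0,1] here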
>>>>
>>>> If I use the SICP "trick" of representing data as code I can
>>>> "expand" the data as part of the program proof.
>>>>
>>>> It is all Omphaloskepsis (navel gazing) at this point though.
>>>> I'm still writing the new SANE compiler (which is wildly
>>>> different from the compiler course I taught).
>>>>
>>>> I did give a talk at Notre Dame but I haven't attempted to
>>>> publish. All of my work shows up in literate programming
>>>> Axiom books on github.
>>>> (https://github.com/daly/PDFS)
>>>>
>>>> It is all pretty pointless since nobody cares about computer
>>>> algebra, proving math programs correct, or Axiom itself.
>>>> Wolfram is taking up all the oxygen in the discussions.
>>>>
>>>> But I have to say, this research is great fun. It reminds me
>>>> of the Scratchpad days, although I miss the give-and-take
>>>> of the group. It is hard to recreate my role as the dumbest
>>>> guy in the room when I'm stuck here by myself :-)
>>>>
>>>> Hope you and your family are safe and healthy.
>>>>
>>>> Tim
>>>>
>>>> PS. I think we should redefine the "Hamming Distance" as
>>>> the distance between an idea and its implementation.
>>>>
>>>>
>>>>
>>>> On 7/19/20, William Sit <wsit@ccny.cuny.edu> wrote:
>>>>> Hi Tim:
>>>>>
>>>>> Glad to hear from you now and then, promoting and working towards your
>>>>> ideas
>>>>> and ideals.
>>>>>
>>>>>  >>We need proven algorithms.
>>>>>
>>>>> Just one short comment: it is often possible to prove algorithms (that
>>>>> is,
>>>>> providing the theoretical foundation for the algorithm), but it is much
>>>>> harder to prove that an implementation of the algorithm is correct. As
>>>>> you
>>>>> well know, the distinction lies in that implementation involves data
>>>>> representations whereas proofs of algorithms normally ignore them.
>>>>> Introducing (finite) data representations means introducing boundary
>>>>> situations that a programmer implementing an algorithm must deal with.
>>>>> So
>>>>> perhaps what we need to prove should include the correctness of
>>>>> implementations (to the bare metal, as you often say) and we should
>>>>> have
>>>>> a
>>>>> different set of analytic tools that can deal with the correctness (or
>>>>> completeness) of data representations. Of course, these tools must also
>>>>> be
>>>>> proven with the same rigor since behind every program is an algorithm.
>>>>>
>>>>> William
>>>>>
>>>>> William Sit
>>>>> Professor Emeritus
>>>>> Department of Mathematics
>>>>> The City College of The City University of New York
>>>>> New York, NY 10031
>>>>> homepage: wsit.ccny.cuny.edu
>>>>>
>>>>> ________________________________________
>>>>> From: Axiom-developer
>>>>> <axiom-developer-bounces+wyscc=sci.ccny.cuny.edu@nongnu.org> on behalf
>>>>> of
>>>>> Tim Daly <axiomcas@gmail.com>
>>>>> Sent: Saturday, July 18, 2020 6:28 PM
>>>>> To: axiom-dev; Tim Daly
>>>>> Subject: [EXTERNAL] Re: Axiom musings...
>>>>>
>>>>> Richard Hamming gave a great talk. "You and Your Research"
>>>>> https://www.youtube.com/watch?v=a1zDuOPkMSw
>>>>>
>>>>> His big question is:
>>>>>
>>>>> "What is the most important problem in your field
>>>>> and why aren't you working on it?"
>>>>>
>>>>> To my mind, the most important problem in the field of
>>>>> computational mathematics is grounding computer
>>>>> algebra in proofs.
>>>>>
>>>>> Computer mathematical algorithms that "maybe,
>>>>> possibly, give correct answers sometimes" is a problem.
>>>>> Indeed, for computer algebra, it is the most important
>>>>> problem. We need proven algorithms.
>>>>>
>>>>> New algorithms, better graphics, better documentation,
>>>>> are all "nice to have" but, as Hamming would say,
>>>>> they are not "the most important problem".
>>>>>
>>>>> Tim
>>>>>
>>>>>
>>>>>
>>>>> On 7/2/20, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>> Time for another update.
>>>>>>
>>>>>> The latest Intel processors, available only to data centers
>>>>>> so far, have a built-in FPGA. This allows you to design
>>>>>> your own circuits and have them loaded "on the fly",
>>>>>> running in parallel with the CPU.
>>>>>>
>>>>>> I bought a Lattice ICEstick FPGA development board. For
>>>>>> the first time there are open source tools that support it so
>>>>>> it is a great test bench for ideas and development. It is a
>>>>>> USB stick so it can be easily moved to any PC.
>>>>>> (https://www.latticesemi.com/products/developmentboardsandkits/icestick)
>>>>>>
>>>>>> I also bought a large Intel Cyclone FPGA development board.
>>>>>> (http://www.terasic.com.tw/cgi-bin/page/archive.pl?Language=English&No=836)
>>>>>> which has 2 embedded ARM processors. Unfortunately
>>>>>> the tools (which are freely available) are not open source.
>>>>>> It has sufficient size and power to do anything.
>>>>>>
>>>>>>
>>>>>> I've got 2 threads of work in progress, both of which
>>>>>> involve FPGAs (Field Programmable Gate Arrays).
>>>>>>
>>>>>> Thread 1
>>>>>>
>>>>>> The first thread involves proving programs correct. Once
>>>>>> a proof has been made it is rather easier to check the proof.
>>>>>> If code is shipped with a proof, the proof can be loaded into
>>>>>> an FPGA running a proof-checker which verifies the program
>>>>>> in parallel with running the code on the CPU.
>>>>>>
>>>>>> I am researching the question of writing a proof checker that
>>>>>> runs on an FPGA, thus verifying the code "down to the metal".
>>>>>> The Lean proof checker is the current target.
>>>>>>
>>>>>> The idea is to make "Oracle" algorithms that, because they
>>>>>> are proven correct and verified at runtime, can be trusted
>>>>>> by other mathematical software (e.g. Lean, Coq, Agda)
>>>>>> when used in proofs.
>>>>>>
>>>>>> Thread 2
>>>>>>
>>>>>>
>>>>>> The second thread involves arithmetic. Axiom currently ships
>>>>>> with numeric routines (BLAS and LAPACK, see bookvol10.5).
>>>>>> These routines have a known set of numeric failures such as
>>>>>> cancellation, underflow, and scaling.
>>>>>>
>>>>>> John Gustafson has designed a 'unum' numeric format that can
>>>>>> eliminate many of these errors. (See
>>>>>> Gustafson, John "The End of Error" CRC Press 2015
>>>>>> https://www.amazon.com/End-Error-Computing-Chapman-Computational/dp/1482239868/ref=sr_1_1?dchild=1&keywords=gustafson+the+end+of+error&qid=1593685423&sr=8-1
>>>>>> )
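>>>>>>
>>>>>> (A quick illustration of the cancellation failure mentioned above,
>>>>>> using ordinary binary64 floats rather than unums; the expressions
>>>>>> are made up:)
>>>>>>
>>>>>>     from math import sqrt
>>>>>>     x = 1e8
>>>>>>     naive  = sqrt(x**2 + 1) - x          # cancellation: comes out 0.0
>>>>>>     stable = 1.0 / (sqrt(x**2 + 1) + x)  # same value algebraically, ~5e-9
>>>>>>     print(naive, stable)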
>>>>>>
>>>>>> The research goal is to implement Axiom's floating-point
>>>>>> arithmetic so that it can be offloaded onto an FPGA implementing
>>>>>> the unum format. Such a system would radically simplify
>>>>>> the implementation of BLAS and LAPACK as most of the
>>>>>> errors can't occur. The impact would be similar to using
>>>>>> multi-precision integer arithmetic, only now it's floating-point.
>>>>>>
>>>>>> SANE, the greater goal.
>>>>>>
>>>>>> The Axiom SANE compiler / interpreter can use both of
>>>>>> these tools to implement trusted mathematical software.
>>>>>> It's a long, ambitious research effort but even if only pieces
>>>>>> of it succeed, it changes computational mathematics.
>>>>>>
>>>>>> Tim
>>>>>>
>>>>>> "A person's reach should exceed their grasp,
>>>>>> or what's a computer for?"  (misquoting Robert Browning)
>>>>>>
>>>>>> (https://www.quotetab.com/quote/by-robert-browning/ah-but-a-mans-reach-should-exceed-his-grasp-or-whats-a-heaven-for)
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On 6/16/20, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>> WHY PROVE AXIOM CORRECT (SANE)?
>>>>>>>
>>>>>>> Historically, Axiom credits CLU, the Cluster language by
>>>>>>> Barbara Liskov, with the essential ideas behind the Spad
>>>>>>> language. Barbara gave a talk (a partial transcript below)
>>>>>>> that gives the rationale behind the ``where clause'' used by
>>>>>>> Spad.
>>>>>>>
>>>>>>> She talks about the limits of the compile-time capability.
>>>>>>> In particular, she says:
>>>>>>>
>>>>>>>    To go further, where we would say that T,
>>>>>>>    in addition, has to be an equality relation, that requires
>>>>>>>    much more sophisticated techniques that, even today, are
>>>>>>>    beyond the capabilities of the compiler.
>>>>>>>
>>>>>>> Showing that the ``equal'' function satisfies the equality
>>>>>>> relation is no longer ``beyond the capabilities of the compiler''.
>>>>>>> We have the required formalisms and mechanisms to
>>>>>>> prove properties at compile time.
>>>>>>>
>>>>>>> The SANE effort is essentially trying to push compile
>>>>>>> time checking into proving that, for categories that use
>>>>>>> ``equal'', we prove that the equal function implements
>>>>>>> equality.
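>>>>>>>
>>>>>>> (To make the gap concrete, here is the ``where clause'' idea in a
>>>>>>> modern notation, using Python type hints checked by a tool such as
>>>>>>> mypy; this is illustrative, not Spad or CLU. The checker can verify
>>>>>>> that 'equal' exists, but nothing says it IS an equality relation,
>>>>>>> which is exactly the gap SANE wants to close with proofs.)
>>>>>>>
>>>>>>>     from typing import Protocol, TypeVar, Generic, List
>>>>>>>
>>>>>>>     class HasEqual(Protocol):
>>>>>>>         def equal(self, other: "HasEqual") -> bool: ...
>>>>>>>
>>>>>>>     T = TypeVar("T", bound=HasEqual)  # "where T has equal: (T,T) -> bool"
>>>>>>>
>>>>>>>     class MySet(Generic[T]):
>>>>>>>         def __init__(self) -> None:
>>>>>>>             self.items: List[T] = []
>>>>>>>         def insert(self, x: T) -> None:
>>>>>>>             # equal is guaranteed to exist; that it is reflexive,
>>>>>>>             # symmetric, and transitive is NOT guaranteed here
>>>>>>>             if not any(x.equal(y) for y in self.items):
>>>>>>>                 self.items.append(x)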
>>>>>>>
>>>>>>> I strongly encourage you to watch her video.
>>>>>>>
>>>>>>> Tim
>>>>>>>
>>>>>>> ===========================================
>>>>>>> Barbara Liskov
>>>>>>> May 2012
>>>>>>> MIT CSAIL
>>>>>>> Programming the Turing Machine
>>>>>>> https://www.youtube.com/watch?v=ibRar7sWulM
>>>>>>>
>>>>>>> POLYMORPHISM
>>>>>>>
>>>>>>> We don't just want a set, we want polymorphism or
>>>>>>> generics, as they are called today. We wanted to
>>>>>>> have a generic set which was parameterized by type
>>>>>>> so you could instantiate it as:
>>>>>>>
>>>>>>> Set = [T:type] create, insert,...
>>>>>>>   % representation for Set object
>>>>>>>   % implementation of Set operations
>>>>>>>   Set
>>>>>>>
>>>>>>> Set[int] s := Set[int]$create()
>>>>>>> Set[int]$insert(s,3)
>>>>>>>
>>>>>>> We wanted a static solution to this problem. The
>>>>>>> problem is, not every type makes sense as a parameter
>>>>>>> to Set of T. For sets, per se, you need an equality
>>>>>>> relation. If it has been a sorted set we would have
>>>>>>> some ordering relation. And a type that didn't have
>>>>>>> one of those things would not have been a legitimate
>>>>>>> parameter. We needed a way of expressing that in a
>>>>>>> compile-time, checkable manner. Otherwise we would
>>>>>>> have had to resort to runtime checking.
>>>>>>>
>>>>>>> Our solution was
>>>>>>>
>>>>>>> Set = [T:  ] create, insert,...
>>>>>>>   T equal: (T,T) (bool)
>>>>>>>
>>>>>>>
>>>>>>> Our solution, what we call the ``where clause''. So we
>>>>>>> added this to the header. The ``where clause'' tells you
>>>>>>> what operations the parameter type has to have.
>>>>>>>
>>>>>>> If you have the ``where'' clause you can do the static
>>>>>>> checking because when you instantiate, when you provide
>>>>>>> an actual type, the compiler can check that the type has
>>>>>>> the operations that are required. And then, when you write
>>>>>>> the implementation of Set the compiler knows it's ok to
>>>>>>> call those operations because you can guarantee they are
>>>>>>> actually there when you get around to running.
>>>>>>>
>>>>>>> Of course, you notice that there's just syntax here; there's
>>>>>>> no semantics.
>>>>>>>
>>>>>>> As I'm sure you all know, compile-time type checking is
>>>>>>> basically a proof technique of a very limited sort and
>>>>>>> this was about as far as we can push what you could get out of the
>>>>>>> static analysis. To go further, where we would say that T,
>>>>>>> in addition, has to be an equality relation, that requires
>>>>>>> much more sophisticated techniques that, even today, are
>>>>>>> beyond the capabilities of the compiler.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On 3/24/20, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>>> I've spent entirely too much time studying the legal issues
>>>>>>>> of free and open source software. There are copyright,
>>>>>>>> trademark, and intellectual property laws. I have read
>>>>>>>> several books, listened to lectures, and read papers on
>>>>>>>> the subject. I've spoken to lawyers about it. I've even
>>>>>>>> been required, by law, to coerce people I respect.
>>>>>>>> You would think it was all perfectly clear. It isn't.
>>>>>>>>
>>>>>>>> The most entertaining and enlightening lectures were
>>>>>>>> by Robert Lefkowitz at OSCON 2004. His talk is
>>>>>>>> "The Semasiology of Open Source", which sounds
>>>>>>>> horrible but I assure you, this is a real treat.
>>>>>>>>
>>>>>>>> THE THESIS
>>>>>>>>
>>>>>>>> Semasiology, n. The science of meanings or
>>>>>>>> sense development (of words); the explanation
>>>>>>>> of the development and changes of the meanings
>>>>>>>> of words. Source: Webster's Revised Unabridged
>>>>>>>> Dictionary, © 1996, 1998 MICRA, Inc.
>>>>>>>>
>>>>>>>> "Open source doesn't just mean access to the
>>>>>>>> source code." So begins the Open Source Definition.
>>>>>>>> What then, does access to the source code mean?
>>>>>>>> Seen through the lens of an Enterprise user, what
>>>>>>>> does open source mean? When is (or isn't) it
>>>>>>>> significant? And a catalogue of open source
>>>>>>>> related arbitrage opportunities.
>>>>>>>>
>>>>>>>> http://origin.conversationsnetwork.org/Robert%20Lefkowitz%20-%20The%20Semasiology%20of%20Open%20Source.mp3
>>>>>>>>
>>>>>>>> Computer source code has words and sentence
>>>>>>>> structure like actual prose or even poetry. Writing
>>>>>>>> code for the computer is like writing an essay. It
>>>>>>>> should be written for other people to read,
>>>>>>>> understand and modify. These are some of the
>>>>>>>> thoughts behind literate programming proposed
>>>>>>>> by Donald Knuth. This is also one of the ideas
>>>>>>>> behind Open Source.
>>>>>>>>
>>>>>>>> THE ANTITHESIS
>>>>>>>>
>>>>>>>> "Open Source" is a phrase like "Object Oriented"
>>>>>>>> - weird at first, but when it became popular, the
>>>>>>>> meaning began to depend on the context of the
>>>>>>>> speaker or listener. "Object Oriented" meant that
>>>>>>>> PERL, C++, Java, Smalltalk, Basic and the newest
>>>>>>>> version of Cobol are all "Object Oriented" - for some
>>>>>>>> specific definition of "Object Oriented". Similar is
>>>>>>>> the case of the phrase "Open Source".
>>>>>>>>
>>>>>>>> In Part I, Lefkowitz talked about the shift of the
>>>>>>>> meaning of "Open Source" away from any reference
>>>>>>>> to the actual "source code," and more towards other
>>>>>>>> phases of the software development life cycle. In
>>>>>>>> Part II, he returns to the consideration of the
>>>>>>>> relationship between "open source" and the actual
>>>>>>>> "source code," and reflects upon both the way
>>>>>>>> forward and the road behind, drawing inspiration
>>>>>>>> from Charlemagne, King Louis XIV, Donald Knuth,
>>>>>>>> and others.
>>>>>>>>
>>>>>>>> http://origin.conversationsnetwork.org/ITC.OSCON05-RobertLefkowitz-2005.08.03.mp3
>>>>>>>>
>>>>>>>> THE SYNTHESIS
>>>>>>>>
>>>>>>>> In a fascinating synthesis, Robert "r0ml" Lefkowitz
>>>>>>>> polishes up his exposition on the evolving meaning
>>>>>>>> of the term 'open source'. This intellectual joy-ride
>>>>>>>> draws on some of the key ideas in artificial intelligence
>>>>>>>> to probe the role of language, meaning and context
>>>>>>>> in computing and the software development process.
>>>>>>>> Like Wittgenstein's famous thought experiment, the
>>>>>>>> open source 'beetle in a box' can represent different
>>>>>>>> things to different people, bearing interesting fruit for
>>>>>>>> philosophers and software creators alike.
>>>>>>>>
>>>>>>>> http://itc.conversationsnetwork.org/audio/download/itconversations-1502.mp3
>>>>>>>>
>>>>>>>>
>>>>>>>> On 3/7/20, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>>>> I've pushed the latest version of Axiom. The plan, followed
>>>>>>>>> so far, is to push once a month on the 7th.
>>>>>>>>>
>>>>>>>>> After some chat room interactions it was brought home
>>>>>>>>> again that the proof world really does not seem to like the
>>>>>>>>> idea of proving programs correct. And, given that it was one
>>>>>>>>> of the main Axiom goals and a point of friction during the fork,
>>>>>>>>> the computer algebra world does not like the idea of proving
>>>>>>>>> programs correct either.
>>>>>>>>>
>>>>>>>>> So the idea of "computational mathematics", which includes
>>>>>>>>> both disciplines (as well as type theory) seems still a long
>>>>>>>>> way off.
>>>>>>>>>
>>>>>>>>> Nevertheless, the primary change in these past and future
>>>>>>>>> updates is focused on merging proof and computer algebra.
>>>>>>>>>
>>>>>>>>> Proof systems are able to split the problem of creating a
>>>>>>>>> proof and the problem of verifying a proof, which is much
>>>>>>>>> cheaper. Ideally the proof checker would run on verified
>>>>>>>>> hardware so the proof is checked "down to the metal".
>>>>>>>>>
>>>>>>>>> I have a background in Field Programmable Gate Arrays
>>>>>>>>> (FPGAs) as I tried to do a startup using them. So now I'm
>>>>>>>>> looking at creating a hardware proof checker using a
>>>>>>>>> dedicated instruction set, one designed to be verified.
>>>>>>>>> New CPUs used in data centers (not yet available to us
>>>>>>>>> mortals) have built-in FPGAs so it would be possible to
>>>>>>>>> "side-load" a proof of a program to be checked while the
>>>>>>>>> program is run. I have the FPGA and am doing a gate-level
>>>>>>>>> special instruction design for such a proof checker.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On 2/7/20, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>>>>> As a mathematician, it is difficult to use a system like Axiom,
>>>>>>>>>> mostly because it keeps muttering about Types. If you're not
>>>>>>>>>> familiar with type theory (most mathematicians aren't) then it
>>>>>>>>>> seems pointless and painful.
>>>>>>>>>>
>>>>>>>>>> So Axiom has a steep learning curve.
>>>>>>>>>>
>>>>>>>>>> As a mathematician with an algorithmic approach, it is difficult
>>>>>>>>>> to use a system like Axiom, mostly because you have to find
>>>>>>>>>> or create "domains" or "packages", understand categories
>>>>>>>>>> with their inheritance model, and learn a new language with
>>>>>>>>>> a painful compiler always complaining about types.
>>>>>>>>>>
>>>>>>>>>> So Axiom has a steep learning curve.
>>>>>>>>>>
>>>>>>>>>> The Sane version of Axiom requires knowing the mathematics.
>>>>>>>>>> It also assumes a background in type theory, inductive logic,
>>>>>>>>>> homotopy type theory, ML (meta-language, not machine
>>>>>>>>>> learning (yet)), interactive theorem proving, kernels, tactics,
>>>>>>>>>> and tacticals. Also assumed is knowledge of specification
>>>>>>>>>> languages,
>>>>>>>>>> Hoare triples, proof techniques, soundness, and completeness.
>>>>>>>>>> Oh, and there is a whole new syntax and semantics added to
>>>>>>>>>> specify definitions, axioms, and theorems, not to mention whole
>>>>>>>>>> libraries of the same.
>>>>>>>>>>
>>>>>>>>>> So Axiom Sane has a steep learning curve.
>>>>>>>>>>
>>>>>>>>>> I've taken 10 courses at CMU and spent the last 4-5 years
>>>>>>>>>> learning to read the leading edge literature (also known
>>>>>>>>>> as "greek studies", since every paper has pages of greek).
>>>>>>>>>>
>>>>>>>>>> I'm trying to unify computer algebra and proof theory into a
>>>>>>>>>> "computational mathematics" framework. I suspect that the only
>>>>>>>>>> way this system will ever be useful is after Universities have a
>>>>>>>>>> "Computational Mathematics" major course of study and degree.
>>>>>>>>>>
>>>>>>>>>> Creating a new department is harder than creating Axiom Sane
>>>>>>>>>> because, you know, ... people.
>>>>>>>>>>
>>>>>>>>>> I think such a department is inevitable given the deep and wide
>>>>>>>>>> impact of computers, just not in my lifetime. That's ok. When I
>>>>>>>>>> started programming there was no computer science degree.
>>>>>>>>>>
>>>>>>>>>> Somebody has to be the first lemming over the cliff.
>>>>>>>>>>
>>>>>>>>>> Tim
>>>>>>>>>>
>>>>>>>>>> On 1/9/20, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>>>>>> When Axiom Sane is paired with a proof checker (e.g. with Lean)
>>>>>>>>>>> there is a certain amount of verification that is involved.
>>>>>>>>>>>
>>>>>>>>>>> Axiom will provide proofs (presumably validated by Lean) for its
>>>>>>>>>>> algorithms. Ideally, when a computation is requested from Lean
>>>>>>>>>>> for a GCD, the result as well as a proof of the GCD algorithm is
>>>>>>>>>>> returned. Lean can then verify that the proof is valid. But it is
>>>>>>>>>>> computationally more efficient if Axiom and Lean use a
>>>>>>>>>>> cryptographic
>>>>>>>>>>> hash, such as SHA1. That way the proof doesn't need to be
>>>>>>>>>>> 'reproven', only a hash computation over the proof text needs to
>>>>>>>>>>> be performed. Hashes are blazingly fast. This allows proofs to be
>>>>>>>>>>> exchanged without re-running the proof mechanism. Since a large
>>>>>>>>>>> computation request from Lean might involve many algorithms
>>>>>>>>>>> there would be considerable overhead to recompute each proof.
>>>>>>>>>>> A hash simplifies the issue yet provides proof integrity.
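>>>>>>>>>>>
>>>>>>>>>>> A sketch of that handshake (Python hashlib; the proof text and
>>>>>>>>>>> the Axiom/Lean protocol around it are made up for illustration):
>>>>>>>>>>>
>>>>>>>>>>>     import hashlib
>>>>>>>>>>>
>>>>>>>>>>>     proof_text = b"theorem gcd_comm : ..."   # shipped and verified once
>>>>>>>>>>>     verified = {hashlib.sha1(proof_text).hexdigest()}
>>>>>>>>>>>
>>>>>>>>>>>     def already_verified(proof: bytes) -> bool:
>>>>>>>>>>>         # re-hash instead of re-running the proof checker
>>>>>>>>>>>         return hashlib.sha1(proof).hexdigest() in verified
>>>>>>>>>>>
>>>>>>>>>>>     print(already_verified(proof_text))      # True: no need to re-prove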
>>>>>>>>>>>
>>>>>>>>>>> Tim
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On 1/9/20, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>>>>>>> Provisos.... that is, 'formula SUCH pre/post-conditions'
>>>>>>>>>>>>
>>>>>>>>>>>> A computer algebra system ought to know and ought to provide
>>>>>>>>>>>> information about the domain and range of a resulting formula.
>>>>>>>>>>>> I've been pushing this effort since the 1980s (hence the
>>>>>>>>>>>> SuchThat domain).
>>>>>>>>>>>>
>>>>>>>>>>>> It turns out that computing with, carrying, and combining this
>>>>>>>>>>>> information is difficult if not impossible in the current
>>>>>>>>>>>> system.
>>>>>>>>>>>> The information isn't available and isn't computed. In that
>>>>>>>>>>>> sense,
>>>>>>>>>>>> the original Axiom system is 'showing its age'.
>>>>>>>>>>>>
>>>>>>>>>>>> In the Sane implementation the information is available. It is
>>>>>>>>>>>> part of the specification and part of the proof steps. With a
>>>>>>>>>>>> careful design it will be possible to provide provisos for each
>>>>>>>>>>>> given result that are carried with the result for use in further
>>>>>>>>>>>> computation.
>>>>>>>>>>>>
>>>>>>>>>>>> This raises interesting questions to be explored. For example,
>>>>>>>>>>>> if the formula is defined over an interval, how is the interval
>>>>>>>>>>>> arithmetic handled?
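>>>>>>>>>>>>
>>>>>>>>>>>> A tiny sketch of a result carrying its proviso (Python, not Spad;
>>>>>>>>>>>> the strings stand in for real formulas and conditions):
>>>>>>>>>>>>
>>>>>>>>>>>>     from dataclasses import dataclass
>>>>>>>>>>>>
>>>>>>>>>>>>     @dataclass
>>>>>>>>>>>>     class WithProviso:
>>>>>>>>>>>>         formula: str
>>>>>>>>>>>>         proviso: str            # the SUCH THAT part
>>>>>>>>>>>>         def combine(self, other, op):
>>>>>>>>>>>>             # combining results combines the provisos
>>>>>>>>>>>>             return WithProviso(
>>>>>>>>>>>>                 "(%s) %s (%s)" % (self.formula, op, other.formula),
>>>>>>>>>>>>                 "%s and %s" % (self.proviso, other.proviso))
>>>>>>>>>>>>
>>>>>>>>>>>>     r1 = WithProviso("1/x", "x ~= 0")
>>>>>>>>>>>>     r2 = WithProviso("sqrt(x)", "x >= 0")
>>>>>>>>>>>>     print(r1.combine(r2, "+").proviso)   # x ~= 0 and x >= 0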
>>>>>>>>>>>>
>>>>>>>>>>>> Exciting research ahead!
>>>>>>>>>>>>
>>>>>>>>>>>> Tim
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On 1/3/20, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>>>>>>>> Trusted Kernel... all the way to the metal.
>>>>>>>>>>>>>
>>>>>>>>>>>>> While building a trusted computer algebra system, the
>>>>>>>>>>>>> SANE version of Axiom, I've been looking at questions of
>>>>>>>>>>>>> trust at all levels.
>>>>>>>>>>>>>
>>>>>>>>>>>>> One of the key tenets (the de Bruijn principle) calls for a
>>>>>>>>>>>>> trusted kernel through which all other computations must
>>>>>>>>>>>>> pass. Coq, Lean, and other systems do this. They base
>>>>>>>>>>>>> their kernel on a logic like the Calculus of Constructions or
>>>>>>>>>>>>> something similar.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Andrej Bauer has been working on a smaller kernel (a
>>>>>>>>>>>>> nucleus) that separates the trust from the logic. The rules
>>>>>>>>>>>>> for the logic can be specified as needed but checked by
>>>>>>>>>>>>> the nucleus code.
>>>>>>>>>>>>>
>>>>>>>>>>>>> I've been studying Field Programmable Gate Arrays (FPGA)
>>>>>>>>>>>>> that allow you to create your own hardware in a C-like
>>>>>>>>>>>>> language (Verilog). It allows you to check the chip you build
>>>>>>>>>>>>> all the way down to the transistor states. You can create
>>>>>>>>>>>>> things as complex as a whole CPU or as simple as a trusted
>>>>>>>>>>>>> nucleus. (youtube: Building a CPU on an FPGA). ACL2 has a
>>>>>>>>>>>>> history of verifying hardware logic.
>>>>>>>>>>>>>
>>>>>>>>>>>>> It appears that, assuming I can understand Bauer's
>>>>>>>>>>>>> Andromeda system, it would be possible and not that hard
>>>>>>>>>>>>> to implement a trusted kernel on an FPGA the size and
>>>>>>>>>>>>> form factor of a USB stick.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Trust "down to the metal".
>>>>>>>>>>>>>
>>>>>>>>>>>>> Tim
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On 12/15/19, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>>>>>>>>> Progress is happening on the new Sane Axiom compiler.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Recently I've been musing about methods to insert axioms
>>>>>>>>>>>>>> into categories so they could be inherited like signatures.
>>>>>>>>>>>>>> At the moment I've been thinking about adding axioms in
>>>>>>>>>>>>>> the same way that signatures are written, adding them to
>>>>>>>>>>>>>> the appropriate categories.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> But this is an interesting design question.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Axiom already has a mechanism for inheriting signatures
>>>>>>>>>>>>>> from categories. That is, we can get a plus signature from,
>>>>>>>>>>>>>> say, the Integer category.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Suppose we follow the same pattern. Currently Axiom
>>>>>>>>>>>>>> inherits certain so-called "attributes", such as
>>>>>>>>>>>>>> ApproximateAttribute,
>>>>>>>>>>>>>> which implies that the results are only approximate.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> We could adapt the same mechanism to inherit the Transitive
>>>>>>>>>>>>>> property by defining it in its own category. In fact, if we
>>>>>>>>>>>>>> follow Carette and Farmer's "tiny theories" architecture,
>>>>>>>>>>>>>> where each property has its own inheritable category,
>>>>>>>>>>>>>> we can "mix and match" the axioms at will.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> An "axiom" category would also export a function. This
>>>>>>>>>>>>>> function
>>>>>>>>>>>>>> would essentially be a "tactic" used in a proof. It would
>>>>>>>>>>>>>> modify
>>>>>>>>>>>>>> the proof step by applying the function to the step.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Theorems would have the same structure.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> This allows theorems to be constructed at run time (since
>>>>>>>>>>>>>> Axiom supports "First Class Dynamic Types").
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> In addition, this design can be "pushed down" into the Spad
>>>>>>>>>>>>>> language so that Spad statements (e.g. assignment) had
>>>>>>>>>>>>>> proof-related properties. A range such as [1..10] would
>>>>>>>>>>>>>> provide explicit bounds in a proof "by language definition".
>>>>>>>>>>>>>> Defining the logical properties of language statements in
>>>>>>>>>>>>>> this way would make it easier to construct proofs since the
>>>>>>>>>>>>>> invariants would be partially constructed already.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> This design merges the computer algebra inheritance
>>>>>>>>>>>>>> structure with the proof of algorithms structure, all under
>>>>>>>>>>>>>> the same mechanism.
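>>>>>>>>>>>>>>
>>>>>>>>>>>>>> A rough sketch of the mix-and-match idea (Python classes standing
>>>>>>>>>>>>>> in for tiny-theory categories; the check functions play the role
>>>>>>>>>>>>>> of the exported "tactic", here reduced to brute-force checking):
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>     class Transitive:
>>>>>>>>>>>>>>         def check_transitive(self, rel, xs):
>>>>>>>>>>>>>>             return all(rel(x, z) for x in xs for y in xs for z in xs
>>>>>>>>>>>>>>                        if rel(x, y) and rel(y, z))
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>     class Reflexive:
>>>>>>>>>>>>>>         def check_reflexive(self, rel, xs):
>>>>>>>>>>>>>>             return all(rel(x, x) for x in xs)
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>     class Preorder(Transitive, Reflexive):   # "mix and match" axioms
>>>>>>>>>>>>>>         pass
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>     leq = lambda a, b: a <= b
>>>>>>>>>>>>>>     p = Preorder()
>>>>>>>>>>>>>>     print(p.check_transitive(leq, range(5)),
>>>>>>>>>>>>>>           p.check_reflexive(leq, range(5)))  # True True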
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Tim
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On 12/11/19, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>>>>>>>>>> I've been reading Stephen Kell's (Univ of Kent,
>>>>>>>>>>>>>>> https://www.cs.kent.ac.uk/people/staff/srk21/) essay on
>>>>>>>>>>>>>>> Seven deadly sins of talking about "types"
>>>>>>>>>>>>>>> https://www.cs.kent.ac.uk/people/staff/srk21//blog/2014/10/07/
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> He raised an interesting idea toward the end of the essay
>>>>>>>>>>>>>>> that type-checking could be done outside the compiler.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I can see a way to do this in Axiom's Sane compiler.
>>>>>>>>>>>>>>> It would be possible to run a program over the source code
>>>>>>>>>>>>>>> to collect the information and write a stand-alone type
>>>>>>>>>>>>>>> checker. This "unbundles" type checking and compiling.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Taken further I can think of several other kinds of checkers
>>>>>>>>>>>>>>> (aka 'linters') that could be unbundled.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> It is certainly something to explore.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Tim
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On 12/8/19, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>>>>>>>>>>> The Axiom Sane compiler is being "shaped by the hammer
>>>>>>>>>>>>>>>> blows of reality", to coin a phrase.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> There are many goals. One of the primary goals is creating a
>>>>>>>>>>>>>>>> compiler that can be understood, maintained, and modified.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> So the latest changes involved adding multiple index files.
>>>>>>>>>>>>>>>> These are documentation (links to where terms are mentioned
>>>>>>>>>>>>>>>> in the text), code (links to the implementation of things),
>>>>>>>>>>>>>>>> error (links to where errors are defined), signatures (links
>>>>>>>>>>>>>>>> to
>>>>>>>>>>>>>>>> the signatures of lisp functions), figures (links to
>>>>>>>>>>>>>>>> figures),
>>>>>>>>>>>>>>>> and separate category, domain, and package indexes.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> The tikz package is now used to create "railroad diagrams"
>>>>>>>>>>>>>>>> of syntax (ala, the PASCAL report). The implementation of
>>>>>>>>>>>>>>>> those diagrams follows immediately. Collectively these will
>>>>>>>>>>>>>>>> eventually define at least the syntax of the language. In
>>>>>>>>>>>>>>>> the
>>>>>>>>>>>>>>>> ideal, changing the diagram would change the code but I'm
>>>>>>>>>>>>>>>> not that clever.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Reality shows up with the current constraint that the
>>>>>>>>>>>>>>>> compiler should accept the current Spad language as
>>>>>>>>>>>>>>>> closely as possible. Of course, plans are to include new
>>>>>>>>>>>>>>>> constructs (e.g. hypothesis, axiom, specification, etc)
>>>>>>>>>>>>>>>> but these are being postponed until "syntax complete".
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> All parse information is stored in a parse object, which
>>>>>>>>>>>>>>>> is a CLOS object (and therefore a Common Lisp type).
>>>>>>>>>>>>>>>> Fields within the parse object, e.g. variables, are also
>>>>>>>>>>>>>>>> CLOS objects (and therefore Common Lisp types).
>>>>>>>>>>>>>>>> It's types all the way down.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> These types are being used as 'signatures' for the
>>>>>>>>>>>>>>>> lisp functions. The goal is to be able to type-check the
>>>>>>>>>>>>>>>> compiler implementation as well as the Sane language.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> The parser is designed to "wrap around" so that the
>>>>>>>>>>>>>>>> user-level output of a parse should be the user-level
>>>>>>>>>>>>>>>> input (albeit in a 'canonical" form). This "mirror effect"
>>>>>>>>>>>>>>>> should make it easy to see that the parser properly
>>>>>>>>>>>>>>>> parsed the user input.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> The parser is "first class" so it will be available at
>>>>>>>>>>>>>>>> runtime as a domain allowing Spad code to construct
>>>>>>>>>>>>>>>> Spad code.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> One plan, not near implementation, is to "unify" some
>>>>>>>>>>>>>>>> CLOS types with the Axiom types (e.g. String). How
>>>>>>>>>>>>>>>> this will happen is still in the land of design. This would
>>>>>>>>>>>>>>>> "ground" Spad in lisp, making them co-equal.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Making lisp "co-equal" is a feature, especially as Spad is
>>>>>>>>>>>>>>>> really just a domain-specific language in lisp. Lisp
>>>>>>>>>>>>>>>> functions (with CLOS types as signatures) would be
>>>>>>>>>>>>>>>> available for implementing Spad functions. This not
>>>>>>>>>>>>>>>> only improves the efficiency, it would make the
>>>>>>>>>>>>>>>> BLAS/LAPACK (see volume 10.5) code "native" to Axiom.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On the theory front I plan to attend the Formal Methods
>>>>>>>>>>>>>>>> in Mathematics / Lean Together conference, mostly to
>>>>>>>>>>>>>>>> know how little I know, especially that I need to know.
>>>>>>>>>>>>>>>> http://www.andrew.cmu.edu/user/avigad/meetings/fomm2020/
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Tim
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On 11/28/19, Jacques Carette <carette@mcmaster.ca> wrote:
>>>>>>>>>>>>>>>>> The underlying technology to use for building such an
>>>>>>>>>>>>>>>>> algebra library is documented in the paper "Building on the
>>>>>>>>>>>>>>>>> Diamonds between Theories: Theory Presentation Combinators"
>>>>>>>>>>>>>>>>> http://www.cas.mcmaster.ca/~carette/publications/tpcj.pdf
>>>>>>>>>>>>>>>>> [which will also be on the arxiv by Monday, and has been
>>>>>>>>>>>>>>>>> submitted to a journal].
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> There is a rather full-fledged prototype, very well
>>>>>>>>>>>>>>>>> documented at
>>>>>>>>>>>>>>>>> https://alhassy.github.io/next-700-module-systems/prototype/package-former.html
>>>>>>>>>>>>>>>>> (source at https://github.com/alhassy/next-700-module-systems).
>>>>>>>>>>>>>>>>> It is literate source.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> The old prototype was hard to find - it is now at
>>>>>>>>>>>>>>>>> https://github.com/JacquesCarette/MathScheme.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> There is also a third prototype in the MMT system, but it
>>>>>>>>>>>>>>>>> does not quite function properly today; it is under repair.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> The paper "A Language Feature to Unbundle Data at Will"
>>>>>>>>>>>>>>>>> (https://urldefense.proofpoint.com/v2/url?u=https-3A__alhassy.github.io_next-2D700-2Dmodule-2Dsystems_papers_gpce19-5Fa-5Flanguage-5Ffeature-5Fto-5Funbundle-5Fdata-5Fat-5Fwill.pdf&d=DwIFaQ&c=4NmamNZG3KTnUCoC6InoLJ6KV1tbVKrkZXHRwtIMGmo&r=qW9SUYRDo6sWEVPpx7wwWYZ79PdSWMRxNZvTih0Bkxc&m=_2V6ryqOIDfXNZePX0kmp-2428hMSBYbz5fq8bDzgkQ&s=Rui27trbws4VTZL5B0zits8pEczWsib7Q7_mxyRIxhk&e=
>>>>>>>>>>>>>>>>> )
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> is also relevant, as it solves a problem with parametrized
>>>>>>>>>>>>>>>>> theories
>>>>>>>>>>>>>>>>> (parametrized Categories in Axiom terminology) that all
>>>>>>>>>>>>>>>>> current
>>>>>>>>>>>>>>>>> systems
>>>>>>>>>>>>>>>>> suffer from.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Jacques
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On 2019-11-27 11:47 p.m., Tim Daly wrote:
>>>>>>>>>>>>>>>>>> The new Sane compiler is also being tested with the Fricas
>>>>>>>>>>>>>>>>>> algebra code. The compiler knows about the language but
>>>>>>>>>>>>>>>>>> does not depend on the algebra library (so far). It should
>>>>>>>>>>>>>>>>>> be
>>>>>>>>>>>>>>>>>> possible, by design, to load different algebra towers.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> In particular, one idea is to support the "tiny theories"
>>>>>>>>>>>>>>>>>> algebra from Carette and Farmer. This would allow much
>>>>>>>>>>>>>>>>>> finer grain separation of algebra and axioms.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> This "flexible algebra" design would allow things like the
>>>>>>>>>>>>>>>>>> Lean theorem prover effort in a more natural style.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Tim
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On 11/26/19, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>> The current design and code base (in bookvol15) supports
>>>>>>>>>>>>>>>>>>> multiple back ends. One will clearly be a common lisp.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Another possible design choice is to target the GNU
>>>>>>>>>>>>>>>>>>> GCC intermediate representation, making Axiom "just
>>>>>>>>>>>>>>>>>>> another front-end language" supported by GCC.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> The current intermediate representation does not (yet)
>>>>>>>>>>>>>>>>>>> make any decision about the runtime implementation.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Tim
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On 11/26/19, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>> Jason Gross and Adam Chlipala ("Parsing Parses") developed
>>>>>>>>>>>>>>>>>>>> a dependently typed general parser for context-free grammars
>>>>>>>>>>>>>>>>>>>> in Coq.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> They used the parser to prove its own completeness.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Unfortunately Spad is not a context-free grammar.
>>>>>>>>>>>>>>>>>>>> But it is an interesting thought exercise to consider
>>>>>>>>>>>>>>>>>>>> an "Axiom on Coq" implementation.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Tim
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>


