
Re: [Axiom-developer] Curiosities with Axiom mathematical structures


From: William Sit
Subject: Re: [Axiom-developer] Curiosities with Axiom mathematical structures
Date: Tue, 14 Mar 2006 03:05:42 -0500


Gabriel Dos Reis wrote:
> 
> William Sit <address@hidden> writes:
> 
> | Hi Gabe:
> |
> | Gabriel Dos Reis wrote:
> | > William Sit <address@hidden> writes:
> | >
> | > |  "Bill Page" <address@hidden> writes:
> | > |
> | > | > | I don't think there is any essential reason why SemiGroup and
> | > | > | Monoid could not be implemented in the way you suggest. For
> | > | > | example:
> | > | > |
> | > | > | )abbrev category SGROUP SemiGroup
> | > | > | SemiGroup(m:Symbol): Category == SetCategory with
> | > | > |       m: (%,%) -> %    ++ returns the product of x and y.
> | > | > |       associative(m)
> | > | > |
> | > | > | )abbrev category ABELSQ AbelianSemiGroup
> | > | > | AbelianSemiGroup(m:Symbol): Category == SemiGroup(m) with
> | > | > |       abelian(m)
> | > | > |
> | > |
> | > | Yes, there are no theoretical reasons, but there are plenty of
> | > | practical ones.
> |
> | > In fact, practicality dictates that the implementations in Axiom/Aldor
> | > closely follow the mathematical structures.
> |
> | You probably misread my response to Bill.
> 
> After re-reading the long thread that followed, I do not believe so.
> 
> I think I correctly read your answer and I correctly deduced a fundamental
> mismatch between your conception of what a monoid is and how it should be
> written, and the demands of writing a computer program to express
> mathematical structure.
> 
> | I was saying there are plenty of practical reasons NOT to implement
> | in the way suggested, but no theoretical reasons NOT to implement
> | the goal you suggested, in some way.
> 
> Sorry, none of the reasons you enumerated were "practical".
> They look to me like a choice that "looked good at the time" but turned
> out to be broken, plus a hearty desire to stick to it instead of
> evolving.   It looks to me more ideological than practical.
> 
> It negates practice.
> 
> | > For example, the only
> | > assumption I need to define the power of an element is that its domain
> | > has a monoidal structure.  From a software engineering point of view,
> | > practicality dictates that I should not have to write duplicate code,
> | > one version for the additive operation and one for the multiplicative
> | > operation, when the underlying mathematical structure is the same.  That is
> | > the least I expect from a non-mathematically-oriented system.
> |
> | Agreed in theory, not in practice.
> 
> Wrong; I'm talking of practical issues.

I think we have different notions about what "practical" means. 
 


> | We should distinguish two issues: (1) derived operations that depend only
> | on the defining operations should be generically implemented and inherited,
> | and (2) how to handle the notations (equivalently, identifiers in computer
> | science terminology) for the defining operations and derived operations.
> | Notations (resp. identifiers) are extremely important in mathematics
> | (resp. computer programming). Mathematics can only progress and make
> | strides when good notations are invented AND agreed to.
> 
> Notation is important, and so is semantics.  When syntax takes precedence
> over semantics, the result is empty sophism and boring rhetoric.

> | Examples are the notations for differentiation and integration.
> 
> Sorry, they do not illustrate in any way why an AbelianMonoid is not a
> Monoid.

They weren't meant to.
 


> | A software system that deviates from the universally accepted
> | mathematical notations, even if for the sake of valid ideals, will
> | be unusable, or at best hardly usable.
> 
> Is it more important to derive from an alleged universal notation (when
> in fact the notation is highly ambiguous, and the universality is just a
> mirage) than to derive from universal semantics?  Which mathematical
> authority do you know of, universally agreed upon, that does not recognize
> an Abelian monoid as a monoid?

I was referring to the notation for addition in a ring. This does not mean every
abelian monoid must use + for its operation! I understand your point about
detaching the notation for the operation from the operation itself in regard to
its role in the algebraic structure, but can you really compute using the
operation without a notation (or function name)?



> | Computer programs can work correctly only if identifiers are scoped
> | correctly and such scoping should be transparent when identifiers are
> | overloaded.
> 
> That is what you claim, and yet we have no evidence that Axiom/Aldor
> have "correctly scoped identifiers".  Furthermore, we have to see
> evidence of why it is desired that such "scoping should be transparent
> when identifiers are overloaded."  As a matter of fact, we have
> examples (in this thread) that suggest that the transparency is far
> from being what we need.

'we have no evidence that Axiom/Aldor have "correctly scoped identifiers"' does
not mean there is evidence that Axiom/Aldor incorrectly scope identifiers, and
does not contradict what I claim, which is not specific to Axiom/Aldor.  And
"having examples that suggest the transparency is far from being what we need"
is in itself an admission that transparency is desirable.




> | The concept of a ring is almost omnipresent in all branches of
> | mathematics, and the standard notations for the multiplication (*)
> | and addition (+) are adopted.
> 
> You are missing an important fact: most mathematical texts use
> notation invented when computer programs were not the primary focus of
> the discourse, and where context helped *humans* to automatically
> disambiguate when more operations are involved.

Historic use is not the reason. We are quite capable of using any other
substituted notations. But a commonly accepted notation, and distinguishing
different operations by different names or notations, facilitate communication.
This ease of communication should be a high-priority design goal, whether
it is between humans, between programs, or between humans and programs.




> | You may almost forget that a ring is a monoid wrt * and an
> | abelian monoid wrt + because what is important to a ring is the two
> | together, how they interact via the distributive laws.
> 
> No, you're wrong: I did not forget that.

Sorry. Don't take things personally. Here "You may" meant "One may as well".



> | A practical way to support the structure of a ring requires these
> | two separate notations.
> 
> Wrong: a practical way to support them is not to prejudge the syntax in
> more general contexts, and to stick to the *algebraic structure* being
> modeled.

My "practical" is not the same as your "practical". You meant "general" and I
meant "common", "easy to use and understand". Neither is wrong. It's just a
different perspective. If you can support a ring structure by sticking to the
algebraic structure only, fine, as long as you don't make programming more
difficult for those who want to use * and +.




> | Even in the simple case of your example for a repeated operation
> | (which you called "power"), the two operations * and + will produce
> | (and should produce) outputs in two different notations: x^n (power)
> | and n*x (multiple). In this case, the repeated operations in fact
> | both occur in an *abelian* submonoid setting, even if multiplication is
> | non-commutative.
> 
> Well, unless you're proposing a new mathematics, your reduction is
> again wrong -- because you're prejudging semantics.  Again take
> 
>     f (x, y) = x * y + x + y
> 
> on integers with usual addition and multiplication, then (N, f, 0) is
> another Abelian monoid structure.  Computing the repeated operation
> does not yield n * x or x^n.  Please stop confusing syntax with the
> *structure*.

Good, let's just talk mathematics :-).  Please do not confuse notation with
semantics. I was only stating that the mathematical convention is to denote the
n-fold self multiplication by using an exponential notation, whereas the n-fold
self addition uses a scalar multiplication notation. I place no meaning
whatsoever on the multiplication * or addition + operations. In your example (N,
f, 0), if one uses * to stand for f, one would still use x^n for the n-fold
operation. But of course, that would be confusing because * is already used to
mean the usual multiplication. So one has to use a different notation. Since
*you* are creating this new operation, *you* can use any notation you like (I am
talking about notation as a shorthand for the n-fold product, not what the
result should be, as a polynomial function in x).

Back to computing: the question is how you are going to make Axiom understand
that (N,f,0) is an AbelianMonoid? (Never mind that it is also a Monoid.) This is
a technical problem for which so far no one has offered a solution. In fact, there
is no need to bring the derived function 'power' into the structure of a monoid.
The function 'power' can be implemented in a package, parametrized by any
binary operation, with a calling sequence like power(f, n, x).
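For what it is worth, here is a minimal sketch of such a package. It only
illustrates the calling convention power(f, n, x); the constructor name
NFoldOperation and its abbreviation are made up for this message and are not
existing library code.

    )abbrev package NFOLD NFoldOperation
    NFoldOperation(S: SetCategory): Exports == Implementation where
      Exports ==> with
        power: ((S, S) -> S, PositiveInteger, S) -> S
          ++ power(f, n, x) returns the n-fold product f(...f(f(x,x),x)...,x).
      Implementation ==> add
        power(f, n, x) ==
          r: S := x
          for i in 2..n repeat r := f(r, x)
          r

In the interpreter one could then write, say,

    power((a, b) +-> a*b + a + b, 3, 2)$NFoldOperation(Integer)

for the (N, f, 0) example above, without ever telling the system that (N, f, 0)
is an AbelianMonoid.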




> [...]
> 
> | So I hope we need not argue a third issue: (3) maintaining standard
> | notations for the defining operations in common algebraic structures.
> 
> but standard notation of what?  

Didn't I say: "standard notations for the defining operations in common
algebraic structures"? But clearly, you would take issue.




> Computer programs have very different
> requirements from free mathematical discourse.

I agree (though I would prefer "limitations" to "requirements"), but what's
your point?




> | I think your ideal of avoiding "duplicate code" (much like the
> | ideal of sharing dll libraries) is misplaced.
> 
> Please, if you want to discuss shared dll libraries, let's do that in
> a different thread.  The issue at hand is too important to be left
> diluted in questionable analogies.

Do you have to be so picky? (Below, you raised the OO issues.)
 



> | In order to avoid duplicating code due to the dual use of * and + in
> | the monoid structures, one will have to, as admitted throughout
> | these discussions by everyone, ADD tons of code, in the end
> | making writing software for a ring more difficult, not less. Not
> | only that, I doubt the experts here could agree on how that
> | additional code should be written, even if everything were started
> | from scratch.
> 
> Please, what are you talking about?

Read earlier posts, or just these quotes from below:

  > The proposed change certainly has its set of problems [...] 
  > To start with, the proposal is not
  > complete :-)

  > | If you think now one can redo Axiom in a better way, we are all for
  > | it. What's your priorities? (in other words, when can you start? :-)
  > 
  > What about giving me unrestricted access to the Aldor compiler sources? ;-)
  > (and I'm serious)




> | I think a proper way to "allow" the kind of identification of underlying
> | mathematical structures that are the same is the concept of
> | "isomorphism".
> 
> But not all structures are isomorphic, even in the same
> (mathematical) category.  We do not need to recover from yet another
> set of misguided and misplaced isomorphisms.

Did I say or imply that any two monoids are isomorphic? 




> [...]
> 
> | Let's say in Monoid, I used & for the defining monoid operation. If I want
> | to form a monoid domain with set Integer and operation #, (Integer, #), I
> | would have to specify to the compiler an isomorphism.  The compiler would
> | have to execute this isomorphism by replacing each occurrence of '&' by '#'
> | in the category definition temporarily before matching the domain
> | signatures with the category signatures. This necessitates that in
> | the category definition of a monoid, and in the domain code of an
> | actual monoid, we can identify and separate the defining operations
> | from the derived operations -- which is currently not the
> | case. Let's assume this is done nonetheless.
> 
> You are using confusing terminologies to say exactly what I'm proposing.

Unfortunately, English is such that what is confusing to one may be clear to
another. If my language is confusing, how can you deduce that I am saying
"exactly what I'm [you are] proposing"? (In fact, it looks like I was not).




> | Then the generic code of derived operations (such as 'square' or
> | 'power'), if available, will have to be translated, duplicated, and
> | inserted in the domain for correct signatures and implementation.
> 
> I disagree.  There is no inherent reason why they should be
> duplicated.  The operation '&' is a parameter! 

If you read more carefully, you will note that I did not say '&' is a parameter.




>  Whether the
> instantiation should be through copy-and-build or sharing is a
> completely different issue.

Do you mean "compilation" instead of "instantiation"? Anyway, don't you agree
that a discussion of the mechanisms would be helpful?
 



> [...]
> 
> | > If the system does not let one do that, then the system is practically
> | > defective :-)
> |
> | All systems have defects. Be realistic!
> 
> No kidding.
> 
> I just released one last week.

You mean you just released a system with no defects? Provably?




> [...]
> 
> | > |  Indeed, how is a user to know what symbol was used, say, for the
> | > | operations? What if the user instantiates Integer with both * and +
> | > | for the same operations in two instances?
> | >
> | > Then both will be in scope.  If the user uses * with Integer, the
> | > system knows that (*, Int) is a monoidal structure.  Same if
> | > (+, Int).
> |
> | I meant using * for multiplication of integers in one instantiation,
> | and using + for multiplication of integers in another instantiation:
> | two notations for the same set and operation. (I am not advocating
> | this! See the original message for the context, please.)
> |
> | > | Can a compiler or interpreter catch this?
> | >
> | > Yes, definitely.
> |
> | Really? The two copies of the multiplicative monoid of integers are
> | compiled at different times and instantiated in the SAME
> | interpreter session. Can the interpreter execute something like
> | 3*4+5 correctly and give 60 (without package calls)? What if we have
> | two copies of the ring of integers where in one the notations for *
> | and + are interchanged? When this sort of arbitrary notation is
> | allowed, every function call needs to be a package call or else it
> | would be a nightmare to explain all the "strange" answers. (The
> | isomorphism mechanism does not seem to solve this problem).
> 
> Well, programming languages have been dealing with such issues for
> long time now.  Either, you permit nonsensical uses (because in
> general you can't prevent all of them and yet remain useful); 

Why is that considered "nonsensical"? If I package call each operation, it still
makes perfect sense. But it would be clumsy, that's all. The compiler is not the
culprit here. In the scenario where arbitrary notations are allowed, the compiler
behaves correctly and need not flag any errors or even warnings (because the two
domains are compiled at different times). I won't prejudge such constructions as
not useful (for example, they could have pedagogical value).
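
To make "package call each operation" concrete: suppose, purely hypothetically,
that MultIntPlus were a second compiled copy of the multiplicative monoid of the
integers whose operation happens to be spelled +. Then every use would have to
be package-called in the interpreter, along the lines of

    3 * 4 + 5                  -- the ordinary Integer domain: 17
    (3 * 4) +$MultIntPlus 5    -- + resolves to the hypothetical domain's
                               -- operation (multiplication), giving 60

Clumsy, but unambiguous, which is exactly the point.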




> or you
> do minimal checking when the same operations should be defined
> consistently (programming languages like C++ call it the "One
> Definition Rule", and do not require compilers to diagnose all
> violations, but there are known systems in production use that have
> implemented it for the vast majority of cases and can handle such
> cases).

Note that you brought up C++ (an OO language). I don't see how the "One Definition
Rule" is related to this. In the above scenario, each operation has only one
definition per domain, and the two copies of the ring of integers are
different domains.





> | > | If not, it would be a nightmare of bug reports.
> | >
> | > It would be a nightmare only if one takes the rule that a type has a
> | > unique algebraic structure.  That is both theoretically and practically
> | > false.  See the examples (+, NN), (*, NN), (NN, max) I gave earlier.
> |
> | Quite the contrary. When a type has a unique algebraic structure (I
> | presume you meant only default notations for its defining operators
> | are allowed),
> 
> No, please do take note that I'm distinguishing structure from syntax.

You mean using package calls? I agree that would let the compiler parse the
expression unambiguously.
 



> | there is no ambiguity possible like the 3*4+5 above.
> 
> Ultimately, you have to agree that a symbol resolves to what it
> implements -- even more so in your OO-centric view of mathematical structures.

Only when the compiler is given all the information. BTW, I don't understand why
you label my view as OO-centric. You are the one who brought up C++ and used
C++-like syntax below. So your non-OO-centric view is to detach any object from a
mathematical structure? (Not the underlying set, not the operations? An
isomorphism class?)
 



> | I don't disagree this state is non-optimal, but the proposed state
> | has its problems. Your examples show three *different* binary
> | operations with three *different* identifiers.
> 
> Again, identifiers are not algebraic structures.

Identifiers are needed to name or notate the operations in an algebraic
structure. Would you care to give a non-OO-centric definition of
"algebraic structure"?
 



> The proposed change certainly has its set of problems -- but nothing of
> what you said above is about them.  To start with, the proposal is not
> complete :-)

I thought we were all exploring the issues here.
 



> | If you want to inherit, say, the function 'square' for each, you need
> | to, as explained above, establish three isomorphisms to the
> | fictitious domain % in the categorical definition and translate,
> 
> Please do notice that 'square' implicitly depends on the monoid
> operation passed as a parameter to the monoid structure.  Consequently,
> if I want to have different *instances* of it simultaneously in the
> *same scope*, then obviously I need to give different names to those
> *instances*, e.g.
> 
>      doublePlus == Monoid(T, +).square
>      doubleTime == Monoid(T, *).square

In your OO-syntax, why would you need to rename the operations? The right-hand
sides are already different (so your OO-syntax actually provides the correct
scope). In Axiom's case, the type of the operands dictates the selection of the
operation. It is not enough to say x:T to apply square(x); you must have
x:Monoid(T,+) or x:Monoid(T,*). Once this is declared correctly, then x.square
(in OO-syntax) has no ambiguity.
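
For comparison, with the package sketched earlier (NFoldOperation and power are
still the made-up names from that sketch), the two "squares" can already coexist
in one interpreter session, simply because they are bound to different names:

    squarePlus(x: Integer): Integer == power((a, b) +-> a + b, 2, x)$NFoldOperation(Integer)
    squareTimes(x: Integer): Integer == power((a, b) +-> a * b, 2, x)$NFoldOperation(Integer)

    squarePlus 7     -- 14
    squareTimes 7    -- 49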




> [...]
> 
> | It would be all the more of a problem when all three monoid structures
> | co-exist in the *same* domain (multiple inheritance).
> 
> it would be a problem only if one sticks to an ill-suited OO-centric
> view.  Other models exist, and should be explored.

I wasn't the one who used OO-syntax. I was discussing only the Axiom model.
 



> | Notice that while the domain (NN, +, *, max) exists, the domains
> | (NN, +), (NN, *), and (NN, max) are transiently constructed during
> | compilation only! There is no (NN, +).NRLIB for example. Besides,
> | you can only have ONE signature 'square: % -> %' in the domain
> | (NN, +, *, max), not three.  This possibly is an important reason why
> | AbelianMonoid does not descend from Monoid.
> 
> you're confusing the current ill-suited OO implementation with what
> should happen.  If there is no theoretical reason why that should
> happen, then you are better off revising your implementation model.

What "ill-suited OO implementation"? and of what? I see, to you, there are only
algebraic structures, no domains. Sorry, that is not the Axiom I know.




> | There seems to be no good way to solve this multiple inheritance problem.
> 
> Abandon OO and that ill-suited multiple inheritance stuff when it is not
> helpful -- e.g. here.

What does the problem have to do with OO? What is "here"?




> [...]
> 
> | Needless to add, such automated translation, duplication and insertion are
> | simply not supported currently (see SandBoxMonoid).
> 
> No.  Please see above.

Here?





> [...]

> many systems were designed more than 30 years ago.  At the time, their
> designers tried to do the best they could.  Those who
> refused to evolve died.  Since the 1970s, advances have been made in
> understanding type systems; it would be singular if Axiom refused to
> evolve and stuck to OO-centric views we now know do not work well.
> We can't require an unreasonable degree of foresight from designers, but
> it is unreasonable that the system refuses to evolve -- if it ever
> wants to survive.

I don't dispute that wisdom. But survival (for software) depends on far more
than just a good design. Maple and Mathematica both survive.
 



> [...]
> 
> | Coherence, if I understand what you meant by that, requires confluence in
> | rewriting rules.
> 
> No, you misunderstood what I was saying.

I'm listening ...




> [...]
> 
> | If you think now one can redo Axiom in a better way, we are all for
> | it. What's your priorities? (in other words, when can you start? :-)
> 
> What about giving me unrestricted access to the Aldor compiler sources? ;-)
> (and I'm serious)
> 
> [...]
> 
> | > | I do not question the theoretical advantage of rebuilding all
> | > | algebra based on properties of operators (there is research in
> | > | theory of operads which would support such a design) but I doubt
> | > | their practicality, especially when the notation for the operators
> | > | can only be known dynamically at run-time.
> | >
> | > Well, I'm approaching the issue more from a *practical* point of view
> | > than a theoretical point of view.  As the system currently stands, in
> | > practice, I cannot simply and clearly write once a generic function
> | > for monoidal structures and expect it to work both for Abelian and
> | > non-Abelian monoids.
> |
> | This is not true. First, you don't mean non-Abelian monoids; you mean
> | not-necessarily-Abelian monoids (or simply monoids).
> 
> I meant what I said.

A non-Abelian monoid means a monoid that has at least one pair of elements that
do not commute with respect to the monoid operation. But you can certainly write
a package with your function taking the monoid operation as a parameter. You do
not need to recognize an AbelianMonoid as a Monoid to do that.




> | An abelian monoid IS also a monoid and hence it will work!
> 
> Oh really?
> 
> | (You just need to redefine AbelianMonoid to use * instead of +,
> | showing it is not an inherent weakness of the system but just a
> | matter of notations).
> 
> You are displaying an incoherent attitude in this matter, and at this
> point I don't know how serious you are.

I am as serious as you are. The only reason AbelianMonoid does not currently
descend from Monoid is notation! If you use * for all monoid (abelian or not)
operations, you can have AbelianMonoid descend from Monoid. (You can't do this
multiple times in one domain, but that is a totally different problem, one of
multiple inheritance.)
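
To illustrate the notational point only (this is emphatically not the current
library definition: the real AbelianMonoid uses + and both categories export
more than is shown here, and the attribute style follows the proposal quoted
near the top of this message):

    )abbrev category MONOID Monoid
    Monoid(): Category == SemiGroup with
        1: constant -> %          ++ the identity element for *

    )abbrev category ABELMON AbelianMonoid
    AbelianMonoid(): Category == Monoid with
        commutative("*")

With that, every domain satisfying AbelianMonoid would automatically satisfy
Monoid, which is exactly the inheritance being debated; the cost, as discussed
above, is that additive notation such as n*x, and the coexistence of two monoid
structures in a Ring, would then have to be handled some other way.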




> 
>    f(x, y) = x * y + x + y
> 
> f uses both * and +.  Now, (N, f, 0) is an Abelian monoid, so you
> propose that in the same session I rename f to *, * to + and + to ?
> You're joking, right?

You just made my point. You already used * and + as Axiom convention now
dictates. And renaming them (which is an allowed consequence if you allow
passing the operation to Monoid as a parameter) would allow this type of crazy
twisting of notations. This is my objection to passing operators as parameters:
it can be abused.
 



> [...]
> 
> | > | As already well-known, with the current status, all properties of
> | > | operators are declarative and not verified.
> | >
> | > My problem is simpler than that.  I'm not asking for the definition of
> | > the algebraic properties of operators.  I'm trying to have a way to
> | > convince Axiom/Aldor to support sound software engineering practice.
> | > Even better if I can take the standard library as an example.
> |
> | No, you are not discussing "sound software engineering practice". You are
> | arguing about how to inherit abstract algebraic properties.
> 
> No.  The ill-suited inheritance stuff is just one irritating
> manifestation of "sound software engineering practice".  Don't shout
> the messenger.


Did I shout? Inheritance is a mathematical concept: to build a more advanced
algebraic structure based on a simpler one. It is not just an OO-concept. 




> [...]
> 
> | I understand your points. There are many monoid structures in any algebraic
> | object because a monoid is a very simple structure.
> 
> No, the problem is not because it is a simple structure, it is because
> it is a *structure* and the current system confuses structure with
> syntax.  That is the root of the problem.

Wait a minute. Are you saying one can say X is a monoid without saying what
binary operation defined on X makes it a monoid? That operation IS part of the
structure. It has nothing to do with syntax if you only talk mathematically. You
still need to notate the operation if you want to do any computation.




> [...]
> 
> | Have you seen ANY paper which interchanges the * and + notations in
> | a commutative ring?)
> 
> That is beside the point.  The issue is not whether * should be
> interchanged with +.  

Sorry, that is the whole point! I don't see, even in mathematics, how you can
talk about an abelian monoid or a monoid without the operations. Sure, it does
not matter what you call the operation, but you cannot talk about it without
referring to it by some name. Even if you think of a monoid as a set with an
unnamed operation (much as a univariate polynomial does not need a name for
its main variable), when you have multiple monoid structures on the same
underlying set, you will need to refer to them by different names (again, much
as in multivariate polynomials you need a set of variable names).




> The issue is how to simply design the system so
> that AbelianMonoid is indeed a Monoid.

Without worrying about notation, and looking purely at the mathematics, that is
easy. Unfortunately, it causes more problems than it solves, and the resulting
design will be anything but simple. Its complexity lies not in the concept,
which is simple, and not even in a straightforward implementation, but in the
tangle with notation, inheritance, multiple inheritance, etc.

William



