
Re: guile-json 0.2.0 released

From: Panicz Maciej Godek
Subject: Re: guile-json 0.2.0 released
Date: Fri, 5 Apr 2013 11:41:01 +0200

2013/4/5 Daniel Hartwig <address@hidden>
On 4 April 2013 20:06, Panicz Maciej Godek <address@hidden> wrote:
> There are, however, situations when one wants to have an ordered set,
> and it's good to have the choice. Clojure, for instance, offers such a
> choice, and from the perspective of a programmer it's better to have one.

Note that Scheme provides the essential building blocks to create such
an ordered map data type.  It is rather simple, at most a few dozen
lines.  Further, you can define the ordering semantics to precisely
suit your use case.  No doubt every user of an ordered map has
something else in mind, i.e. insertion-order, key-order, when to
perform the ordering (bulk insert followed by sort, or sort as you
go), etc.

One of the main distinguishing points of Scheme is that it does not
attempt to provide absolutely everything you could ever need ready
to use, but instead gives a powerful set of well-defined tools so
that you can construct solutions to any problem.
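To make Daniel's point concrete, an insertion-ordered map really can be written in a few dozen lines. The sketch below is only an illustration, not an existing Guile API -- make-ordered-map and the message names are invented here. It keeps entries in insertion order in an alist behind a closure:

```scheme
;; Minimal insertion-ordered map, sketched as a closure over an alist.
;; All names here are invented for illustration.
(define (make-ordered-map)
  (let ((pairs '()))                        ; newest entries appended at the end
    (lambda (msg . args)
      (case msg
        ((set!)
         (let* ((key (car args))
                (val (cadr args))
                (entry (assoc key pairs)))
           (if entry
               (set-cdr! entry val)         ; update in place, keep position
               (set! pairs (append pairs (list (cons key val)))))))
        ((ref)
         (let ((entry (assoc (car args) pairs)))
           (and entry (cdr entry))))
        ((->alist) pairs)))))

(define m (make-ordered-map))
(m 'set! 'b 2)
(m 'set! 'a 1)
(m '->alist) ; => ((b . 2) (a . 1)) -- insertion order preserved
```

The ordering policy (insertion order here) lives entirely in the `set!` clause, which is exactly the flexibility Daniel describes.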

I certainly wouldn't want Guile to provide *absolutely* everything; I rather think it is a matter of shifting the weight properly. Guile has already moved away from the minimalistic spirit of Scheme, e.g. by introducing keyword arguments in the core -- despite the fact that anyone could implement their own keyword-argument mechanism if they wanted to. However, although this feature is unnecessary, it's good that it's done in an efficient and uniform way, because it's very handy.
From the perspective of a Schemer, it's better to have _tools_.  If you
want a language where absolutely every feature possible is living in
the box ready to go, then you have plenty of choices for those.

No language provides absolutely every feature possible, and there'll never be such a language, because features stem from applications, which are products of human invention and are not known a priori.
Rather than just adding feature atop feature, the objective is to
isolate the few core functions that truly add utility to the
programming environment, then provide those.

I certainly agree with you on this one. The question is about the nature of that utility. Programming languages are not only about programming, but also about communication, and that's why I think that the common things should be done in a common way, instead of pushing programmers towards idiosyncrasies. This especially concerns a language that aspires to be ubiquitous.
>> > - secondly, there is no default notation to create hash tables nor
>> > sets; using them forces
>> > a programmer to drop homoiconicity, as their default print
>> > representation is #<hash-table 1c8a940 1/31> or something even uglier.
>> > I think that this is done properly in Clojure.
>> That is not what homoiconicity means.  There are other data types that
>> lack a proper external representation; most notably procedures.  For
>> transmission of data, converting to an alist and back is probably good
>> enough; this can also be used as a "hack" for having "literal"
>> dictionaries in code: (alist->dictionary '(...))
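For illustration, the alist->dictionary "hack" mentioned above might be sketched like this (alist->table is an invented name for this sketch; if I'm not mistaken, newer Guile versions ship a similar alist->hash-table in (ice-9 hash-table)):

```scheme
;; Invented helper: build a hash table from a quoted alist, giving an
;; almost-literal notation for dictionaries in source code.
(define (alist->table alist)
  (let ((table (make-hash-table)))
    (for-each (lambda (pair)
                (hash-set! table (car pair) (cdr pair)))
              alist)
    table))

(define config (alist->table '((host . "localhost") (port . 8080))))
(hash-ref config 'port) ; => 8080
```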

Hash tables are not just a set of (key, value) pairs, they also
include the particular hash and equality procedures that are used with
them.  These could be arbitrary procedures, and procedures can not
generally be converted to a string and back again, so, by extension,
neither can hash tables even if you could do that for their content.

It would be misleading to provide a write format that appears to be
read syntax.

Again, it is a discussion between generality and commonness. For a practical language, common tasks should be easy.

> Of course it can. However it's not convenient. I use emacs+geiser and when I
> want to see the content of a variable -- if it's a list or some other basic
> type -- I just point a cursor on it and I get the value in a minibuffer.
> When I want to see the content of hash-table, I need to explicitly evaluate
> (hash-map->list cons my-hash-table), which seems unnecessary. When a
> hash-table is nested, it turns into a nightmare.

So hook your tools to do that automatically when the value at point is
a hash table.  You can take on the arbitrary performance penalty.
Please, no patches to change Geiser's current behaviour.

Well, Geiser does quite a simple job -- it displays the print representation of an evaluated expression in the minibuffer. And it would be enough to change that representation in Guile to make it work. It works easily with geiser+racket, where one can write, for instance
(hash-ref #hash((a . 1)(b . 2)) 'a) ===> 1
(I'm not saying that I like that syntax/representation, but it's definitely more convenient than in Guile)

> On the other hand, what are the arguments against making hash-tables, vectors
> et al. applicable, assuming that "programming languages should be designed
> not by piling feature on top of feature, but by removing the weaknesses and
> restrictions that make additional features appear necessary"?

Erm, your quote seems to argue against your own position?

Maybe it does to you, but to me it's not so obvious.
The fact that various objects aren't applicable can be perceived as a weakness and a restriction, because it decreases the number of acceptable forms.
Especially when we confront it with GOOPS. Why, for instance, can't we write
(define-method (call (v <vector>) (i <integer>)) (vector-ref v i))
and then use it like that: (#(0 1 2) 1) ===> 1
That would be a general mechanism, which would increase the number of ways in which the language can be used.

Any applicable (“message passing”) interface is going to wrap
procedures that perform the lookup, update, etc..  Since those
procedures must be present anyway, it is those procedures that the
Guile API provides.  This is the more natural interface.

Again, you have the tools to build a message passing interface from
the tools provided.  It is a trivial task, but does not add value to
the set of tools provided.

I don't think that's entirely true. I could wrap, for instance, a vector in a closure, but then I lose the advantage of its print representation and read syntax. Unless you know how to achieve it without losing those advantages.
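For what it's worth, the closure wrapper under discussion could look like the sketch below (all names invented for illustration). A 'raw message gives back the printable vector, though it admittedly doesn't restore a read syntax for the wrapper itself:

```scheme
;; Sketch: a vector wrapped in a message-passing closure.
(define (applicable-vector . elems)
  (let ((v (list->vector elems)))
    (lambda (msg . args)
      (case msg
        ((ref)  (vector-ref v (car args)))
        ((set!) (vector-set! v (car args) (cadr args)))
        ((raw)  v)))))                  ; escape hatch to the printable vector

(define av (applicable-vector 0 1 2))
(av 'ref 1)   ; => 1
(av 'raw)     ; => #(0 1 2)
```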

>> > - lastly, guile's support for hash tables is limited -- there ain't
>> > even a built-in function that would return the size of a hash-table.
>> > My implementation is inefficient (linear time), and it looks like
>> > this:
>> > (define (hash-size hash-map)
>> > (length (hash-values hash-map)))
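A small improvement on the quoted definition -- still O(n), but without building an intermediate list of all the values -- is to count with hash-fold, a standard Guile procedure. Newer Guile versions also provide hash-count, which takes a predicate, though I haven't checked exactly when it appeared:

```scheme
;; Count entries by folding over the table instead of listing its values.
(define (hash-size table)
  (hash-fold (lambda (key value count) (+ count 1))
             0
             table))

;; In newer Guiles, if available: (hash-count (const #t) table)
```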

Question: what _value_ does that information add to your algorithm?

I'm not sure if I get your question right. Are you asking why I needed to implement the hash-size procedure?
I was implementing a client/server protocol. The packets were s-exps sent over UDP sockets, so I could parse them with plain read. However, the size of UDP packets was limited, so I needed a mechanism that would allow me to split one set of data into several packets.
First, an initial packet was sent that looked like this:
`(begin-transaction ,id ,count)
where id is specific to a set of packets. Then, subsequent packets were sent that looked like this:
`(transaction ,id ,n ,data)
where n was the ordinal number of the packet.
I made no assumptions about the order in which the packets arrive, and it could happen that the initial packet arrived after some of the data packets -- so the number of packets in a transaction might only become known after some data packets had already arrived. That's why I couldn't use a vector. Of course, I could synchronize with the initial packet and then use a vector, but then I'd need an additional variable to track how many packets had been received. I decided to use a hash table also because it knows how many elements it contains, and it works easily even with duplicated packets. It turned out, however, that getting that number is non-trivial.
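A rough sketch of that reassembly logic, with helper names and structure invented here purely for illustration (packet shapes follow the ones given above, matched with (ice-9 match)):

```scheme
(use-modules (ice-9 match))

(define expected-counts (make-hash-table))  ; id -> declared packet count
(define chunks          (make-hash-table))  ; id -> table mapping n -> data

(define (chunks-for id)
  (or (hash-ref chunks id)
      (let ((t (make-hash-table)))
        (hash-set! chunks id t)
        t)))

(define (handle-packet packet)
  (match packet
    (('begin-transaction id count)
     (hash-set! expected-counts id count))
    (('transaction id n data)
     ;; duplicated packets overwrite the same key, so the count stays right
     (hash-set! (chunks-for id) n data))))

(define (transaction-complete? id)
  (let ((count (hash-ref expected-counts id))
        (t     (hash-ref chunks id)))
    (and count t
         (= count (hash-fold (lambda (k v n) (+ n 1)) 0 t)))))
```

Because the per-transaction table keys chunks by their ordinal number, out-of-order and duplicated packets need no special handling -- which is where the table-size question comes from.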
I don't know if that answers your question. 

Best regards,
