gnumed-devel
Re: [Gnumed-devel] re: postgres cursor protocol


From: Karsten Hilbert
Subject: Re: [Gnumed-devel] re: postgres cursor protocol
Date: Mon, 7 Jun 2004 18:51:55 +0200
User-agent: Mutt/1.3.22.1i

On Mon, Jun 07, 2004 at 12:02:14PM +1000, sjtan wrote:
> 
> For each cursor(), curs.execute('..'), curs.fetchall() there are 4
> packets for cursor() and curs.execute() (a cursor declare, response, a
> query declare, response),
> and 2 packets for fetchall() (a cursor fetch, a response packet with
> data) and a cursor close (2 packets). So 8 packets per query, ...
>
> ... "if one query per cursor."

This is completely wrong, conceptually. There is only ever one
query associated with one cursor. Cursors are NOT in any way
connections through which arbitrary queries are sent. Rather, a
cursor is an *alternative* way of returning results. The point
is that you can tell PostgreSQL "no, I don't want you to shower
me with all the result rows right away; instead, hand back a
nice, convenient cursor, and please name it >my_nice_cursor<".
In that case PostgreSQL keeps the query results *on the server*
and returns a cursor, so a cursor is pretty much the same as a
file handle! We then tell the cursor to .fetchone(),
.fetchmany(), .fetchall(), .move(), .close() etc. Now, in GnuMed
(particularly since it is hardcoded in run_ro_query()) we do a
fetchall() in most cases. My previous suggestion for things
like get_lab_results() that might return arbitrarily large
lists of results was to somehow tie them to a cursor and NOT
do fetchall() on it but rather use fetchmany() et al.
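The fetchmany() pattern I mean looks roughly like this. This is a sketch using the generic Python DB-API (sqlite3 here, so it runs standalone); the lab_result table and batch size are made up for illustration, and with a PostgreSQL driver such as psycopg2 a *named* cursor would additionally keep the result set on the server:

```python
import sqlite3

# Self-contained stand-in for a large result set.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lab_result (id INTEGER, value REAL)")
conn.executemany(
    "INSERT INTO lab_result VALUES (?, ?)",
    [(i, i * 0.1) for i in range(10000)],
)

curs = conn.cursor()
curs.execute("SELECT id, value FROM lab_result")

# Instead of curs.fetchall(), pull the rows in batches so an
# arbitrarily large result never has to sit in client memory at once.
batches = 0
rows_seen = 0
while True:
    rows = curs.fetchmany(500)   # next batch of up to 500 rows
    if not rows:
        break                    # empty list: result set exhausted
    batches += 1
    rows_seen += len(rows)

curs.close()
print(batches, rows_seen)
```

The loop shape is the same with any DB-API driver; only the connect call and, for true server-side cursors, the cursor name differ.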

> Maybe we should begin work on the xml-rpc server, or whatever middleware 
> is going to be used, just so showcasing  the public server will be a bonus.

Simply working on the xml-rpc server does not in any way cut
down retrieval time. There's no immediately obvious speed
benefit (not to mention the additional, if small, overhead)
since the queries need to be done anyway. Sure, the xml-rpc
server would do them locally, but it would still hand out
lists or dicts, not Python objects. The real rub lies in
whether we retrieve a collection of data for *several* value
objects at once or whether we use a separate query for each
and every value object (as we do now). IF we want to work with
value objects (and I do) we won't get around parsing the data
into the objects no matter how they get there in the first
place. However, it most likely does make a difference whether
a bulk query retrieves all of a patient's lab results at once
and caches them on the *client*, or whether each and every
single value object connects to the database.
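To make the bulk idea concrete, here is a sketch: one query pulls all of a patient's rows in a single round trip and the client parses them into value objects. The LabResult class, table layout, and get_lab_results_bulk() name are hypothetical illustrations (sqlite3 used so it runs standalone), not GnuMed's actual schema or API:

```python
import sqlite3

class LabResult:
    """Hypothetical value object parsed from one result row."""
    def __init__(self, pk, value):
        self.pk = pk
        self.value = value

def get_lab_results_bulk(conn, pk_patient):
    # ONE query for the whole collection, instead of one query
    # per value object ...
    rows = conn.execute(
        "SELECT pk, value FROM lab_result WHERE fk_patient = ?",
        (pk_patient,),
    ).fetchall()
    # ... then parse the rows into value objects on the client,
    # where they can be cached.
    return [LabResult(pk, value) for pk, value in rows]

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE lab_result (pk INTEGER, fk_patient INTEGER, value REAL)"
)
conn.executemany(
    "INSERT INTO lab_result VALUES (?, ?, ?)",
    [(i, i % 3, float(i)) for i in range(9)],
)

results = get_lab_results_bulk(conn, 1)
print(len(results))
```

The per-object alternative would issue one such SELECT inside every value object's constructor, which is exactly the round-trip multiplication described above.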

This is what I will be working on speedwise when I get around
to it. It's going to be transparent, however.

Karsten
-- 
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346
