
Re: [Gnumed-devel] GnuMed EMR browser


From: Karsten Hilbert
Subject: Re: [Gnumed-devel] GnuMed EMR browser
Date: Fri, 13 Aug 2004 10:16:36 +0200
User-agent: Mutt/1.3.22.1i

>    Maybe we could cache all retrieved tabular data in
> cClinicalRecord in a dictionary of pairs id : row.
The speedup isn't so much in caching query results one way or
the other (which we already do) but rather in the difference
between single-row fetches from within the VOs and
patient-constrained bulk fetches with middleware-local VO
instantiation.

Now, of course, both mechanisms make sense. I sure want to be
able to say:

    vacc = cVaccination(primary_key=216435)

which is possible right now and is plenty fast if dealing with
just a few known vaccination rows.

However, in many situations we also need this:

    lab_data = emr.get_lab_data()

where we might retrieve a few hundred lab data rows. In that
situation it is faster to say:

def get_lab_data():
    rows = get_lab_rows()
    for row in rows:
        cache['lab'].append(cLab(row_vals=row))

rather than:

def get_lab_data():
    ids = get_lab_ids()
    for id in ids:
        cache['lab'].append(cLab(pk=id))

where each cLab(pk=id) fetches from the database all on its
own.

Note how clinical record API users see no functional
difference whatsoever.
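For illustration only (this is not the actual GnuMed middleware;
the table layout, cLab, and the two fetch functions are made-up
stand-ins, shown against an in-memory sqlite3 database), the two
paths above might be sketched in self-contained form as:

```python
import sqlite3

# stand-in clinical database with a handful of lab rows
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE lab (pk INTEGER PRIMARY KEY, patient INTEGER, result TEXT)"
)
conn.executemany(
    "INSERT INTO lab (patient, result) VALUES (?, ?)",
    [(1, "result-%d" % i) for i in range(5)],
)

class cLab:
    """Toy VO: either wraps an already-fetched row, or fetches itself."""
    def __init__(self, pk=None, row_vals=None):
        if row_vals is not None:
            # bulk path: row was already fetched, no extra round trip
            self.pk, self.patient, self.result = row_vals
        else:
            # single-row path: one database query per instance
            self.pk, self.patient, self.result = conn.execute(
                "SELECT pk, patient, result FROM lab WHERE pk = ?", (pk,)
            ).fetchone()

def get_lab_data_bulk(patient):
    # one patient-constrained query, VOs instantiated locally
    rows = conn.execute(
        "SELECT pk, patient, result FROM lab WHERE patient = ?", (patient,)
    ).fetchall()
    return [cLab(row_vals=row) for row in rows]

def get_lab_data_per_row(patient):
    # one query for the keys, then one further query per VO
    ids = [r[0] for r in conn.execute(
        "SELECT pk FROM lab WHERE patient = ?", (patient,)
    ).fetchall()]
    return [cLab(pk=i) for i in ids]

bulk = get_lab_data_bulk(1)
per_row = get_lab_data_per_row(1)
# identical VOs either way -- callers cannot tell the paths apart
assert [v.result for v in bulk] == [v.result for v in per_row]
```

With a few hundred rows the per-row variant pays for a few hundred
round trips that the bulk variant avoids, while returning the exact
same objects.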

IMO this is the direction we'd want to go with the VOs, e.g.
wrapping the bulk fetches in methods that actually *are* bulk
*requests*.

Also note how this doesn't preclude use of an "async" fetcher
later on.
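One hypothetical way such an async fetcher could slot in later
(fetch_rows_async and its use of a worker thread are my own
illustration, not anything in GnuMed): the bulk query runs in the
background while the caller goes on doing other work.

```python
import queue
import threading

def fetch_rows_async(fetch_func, *args):
    """Run a blocking bulk fetch in a worker thread.

    Returns a queue; the caller collects the row list with .get()
    whenever it actually needs the data.
    """
    q = queue.Queue()
    def worker():
        q.put(fetch_func(*args))
    threading.Thread(target=worker, daemon=True).start()
    return q

# usage sketch: start the fetch, do other work, then collect
# pending = fetch_rows_async(get_lab_rows, patient_id)
# ... other work ...
# rows = pending.get()
```

Since the VO instantiation already happens middleware-locally in
the bulk variant, only the row fetch itself needs to move behind
such an interface; the get_lab_data() signature stays unchanged.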

Karsten
-- 
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346



