Re: [Gnumed-devel] Approaches to maintain clinical data uptime


From: James Busser
Subject: Re: [Gnumed-devel] Approaches to maintain clinical data uptime
Date: Wed, 3 May 2006 08:29:10 -0700


On May 3, 2006, at 5:59 AM, Karsten Hilbert wrote:

On Wed, May 03, 2006 at 01:04:08AM -0700, Jim Busser wrote:

> > - can Syan or Karsten, if the work involved is limited, attempt a
> > load test of writing the dump file, so we might know how long this
> > would require and what its size might be...
>
> This is entirely dependent on the database content. The dump
> file may grow huge, indeed (several GB, eventually).
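
For what it's worth, a minimal sketch of such a load test in Python, assuming pg_dump is on the PATH; the database name and output path are hypothetical and would need substituting for a real installation:

  import os
  import subprocess
  import time

  DB = "gnumed"                    # hypothetical database name
  OUT = "/tmp/gnumed-dump.sql"     # hypothetical output path

  start = time.time()
  # write a plain-text dump; check=True raises if pg_dump fails
  # (assumes local authentication is already set up)
  subprocess.run(["pg_dump", "-f", OUT, DB], check=True)
  elapsed = time.time() - start

  size_mb = os.path.getsize(OUT) / (1024.0 * 1024.0)
  print("dump took %.1f s, size %.1f MB" % (elapsed, size_mb))

That would give concrete numbers for both questions (duration and size) on a given installation.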

This would mean it cannot be copied onto a single CD. Hopefully it would fit on a DVD; otherwise each dump would have to be split across several pieces of media, which would be an incredible workflow pain (and pertinent to office procedures).
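
If a compressed dump did outgrow a single disc, it could at least be split automatically rather than by hand. A sketch, assuming roughly 4.3 GB of usable DVD capacity (the file names are hypothetical):

  CHUNK = 4300 * 1024 * 1024       # assumed usable DVD capacity, bytes
  part = 0
  with open("/tmp/gnumed-dump.sql.gz", "rb") as src:
      while True:
          data = src.read(CHUNK)
          if not data:
              break
          with open("/tmp/gnumed-dump.sql.gz.%03d" % part, "wb") as dst:
              dst.write(data)
          part += 1

This is just the Unix split(1) behaviour spelled out; the pieces concatenate back together before a restore.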

Also pertinent to workflow is the amount of time required for the backup dump to be written to disk so that the media can be taken away. Sure, it could instead be written at 02:00h. That would not matter provided the dumps had been encrypted AND the purpose of the media was purely notarial; but if the media may have to be the source of a restoration (because there was no slave, or because both master and slave got corrupted), then it would need to be taken away "live" at the end of the day.

So where office activity might end at 5:30 pm daily, the time required for the dump to be generated, notarized, encrypted and written to media (hopefully automated) becomes of great interest to the last employee, who may need to be scheduled to wait 10 vs 20 vs 40 minutes before being able to leave, if office policy is built around taking the media away. But if the dump does not depend on a "clean" work stoppage, it could be scheduled to begin earlier, if any server performance penalty were tolerable.
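
To put numbers on that wait, the notarize-and-encrypt steps could be timed the same way. A sketch, assuming GnuPG is installed and using "backup@praxis" as a hypothetical recipient key:

  import hashlib
  import subprocess
  import time

  DUMP = "/tmp/gnumed-dump.sql.gz"   # hypothetical dump path
  start = time.time()

  # record a digest of the dump as the notarial fingerprint
  with open(DUMP, "rb") as f:
      digest = hashlib.sha256(f.read()).hexdigest()
  with open(DUMP + ".sha256", "w") as f:
      f.write(digest + "\n")

  # encrypt the copy destined for the offsite media
  subprocess.run(["gpg", "--batch", "--yes",
                  "--output", DUMP + ".gpg",
                  "--encrypt", "--recipient", "backup@praxis", DUMP],
                 check=True)

  print("notarize + encrypt took %.1f s" % (time.time() - start))

Run nightly from cron at closing time, the printed timings would tell the practice exactly how long that last employee has to wait.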




