Re: [Sks-devel] Adding DB_INIT_LOCK to sks-keyserver (revisited)


From: Kim Minh Kaplan
Subject: Re: [Sks-devel] Adding DB_INIT_LOCK to sks-keyserver (revisited)
Date: Sat, 27 Feb 2010 11:26:30 +0000
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/23.1 (gnu/linux)

Jeff Johnson writes:

> I am talking about catastrophic recovery, particularly
> in the sense of hardening, as in not having to reload
> an entire database, for certain types of failures.

The procedure for catastrophic recovery is described in depth in the
Berkeley DB manual.  In particular, if you want to be able to do this
kind of recovery you should not delete the log files unless you are
really sure they are not needed any more.  That said, catastrophic
recovery is mostly a matter of system administration and organisational
procedure.  See [1] for more on this.
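
As a sketch (the KDB directory name comes from this thread; the manual
pages in [1] are the authoritative reference), keeping catastrophic
recovery possible means archiving both the database files and every log
file, then replaying with db_recover in catastrophic mode:

    db_checkpoint -1 -h KDB    # force a checkpoint now
    db_archive -s -a -h KDB    # list the database files to snapshot
    db_archive -l -a -h KDB    # list all log files, even those still in use

    # after restoring the snapshot plus all later log files into KDB:
    db_recover -c -v -h KDB    # replay everything in catastrophic mode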

> Examine the size of your logs, and the size of the tables, in KDB/*.
>
> The logs should be approx. the same size (key material is rather 
> uncompressible)
> as the tables in order to guarantee *everything* can be recreated.

No.  Whatever the size of the database, the logs start really small.
Then they grow as operations are committed, until the next checkpoint.
Once the checkpoint is over, the log files can be deleted (but should
not be if you plan to do catastrophic recovery or some form of advanced
redundancy) and the cycle starts again.  So it is perfectly normal to
have a small number of log files if you remove unused ones.
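
That cycle looks something like this with the command-line tools (using
-h to point at the SKS KDB environment is an assumption on my part):

    db_checkpoint -1 -h KDB   # write a checkpoint; older logs become unneeded
    db_archive -h KDB         # list log files no longer needed for normal recovery
    db_archive -d -h KDB      # delete them, which forfeits catastrophic
                              # recovery unless copies were archived first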

> My logs (particularly after running
>       db_checkpoint -1
>       db_archive -dv)
> are not sufficient to recreate the database in its entirety, just by
> looking at the size of the files involved.

This is normal and the expected outcome of db_checkpoint.  After
db_checkpoint you do not need any log files to recreate the database in
its entirety; the snapshot is sufficient.
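
Concretely (a sketch; the backup destination is hypothetical, and it
assumes the environment is quiescent while the files are copied):

    db_checkpoint -1 -h KDB    # flush all committed changes to the tables
    db_archive -s -a -h KDB | xargs -I{} cp {} /backup/KDB/
                               # the snapshot alone now suffices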

> The definition for catastrophic recovery depends on the size of the logs that 
> are kept.

I use the definition of catastrophic recovery from the "Database and
log file archival" chapter of the Berkeley DB manual.  With that
definition I cannot see any need to plan for catastrophic recovery of
the prefix tree, as it can be constructed from scratch and *must* be
kept synchronized with the keys database: using a prefix tree that is
not exactly the one corresponding to the keys database sounds like a
recipe for trouble.  That basically means that you can *not* use the
catastrophic recovery procedure for it.
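
Rather than recovering the prefix tree, you would rebuild it.  If memory
serves, SKS has a command for exactly that (the cache sizes here are
illustrative, check sks --help):

    sks pbuild -cache 20 -ptree_cache 70   # rebuild the PTree from the keys database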

The keys database could use some form of backup procedure.  The command
"sks dump" is a good one, but currently it requires that you stop the
recon and db processes.  One of the SKS server operators mentioned that
removing Dbenv.RECOVER from keydb.ml works fine and might permit dumping
the database without interrupting the server.
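
For reference, the dump invocation looks something like this (the
keys-per-file count and the output directory are illustrative):

    sks dump 15000 dump/   # write the keys database out, 15000 keys per file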

> What is the schema in use for the KDB tables? I'm looking
> for the {key,data} definitions for put operations performed
> on the tables in KDB in particular.

If memory serves me well, key is {key-hash, key-material}, keyid is
{key-id, key-hash}, and word is {word, key-hash}.  I do not know the
other databases.
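
One way to look at those {key, data} pairs directly is Berkeley DB's
dump tool (treating "key" as the table's file name inside KDB is an
assumption):

    db_dump -p -h KDB key | head   # print records of the key table in printable form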

[1]
http://www.oracle.com/technology/documentation/berkeley-db/db/programmer_reference/transapp_checkpoint.html
http://www.oracle.com/technology/documentation/berkeley-db/db/programmer_reference/transapp_archival.html
http://www.oracle.com/technology/documentation/berkeley-db/db/programmer_reference/transapp_recovery.html
-- 
Kim Minh



