
RE: CVS Repository On-Line Backup Capability i.e. Hot Backup


From: Schrum, Allan (Allan)
Subject: RE: CVS Repository On-Line Backup Capability i.e. Hot Backup
Date: Mon, 16 Feb 2004 09:45:31 -0500

Since you are recursing over all the directories, you seem to be
creating a list of files to archive. You can pass a list of files
to tar with the "-I" option ("-T" for GNU tar, I think). As long
as an entry is not a directory, tar will not recurse into it, yet
the full path name (directory hierarchy) is preserved.
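
For example, with GNU tar something like this should do it (a
sketch only; the list and archive names are made up):

    # Build a list of plain files, then archive exactly those
    # entries -- no directories, so no recursion.
    find /cvsroot -type f -print > filelist.txt
    tar -czf backup.tar.gz -T filelist.txt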

Also, have you thought about using cpio, which reads file names
from standard input and does not recurse?

Tar has a limit on the maximum length of the file names it will
archive (the classic ustar format caps path names at roughly 255
characters). Cpio does not seem to have that limit.
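
A minimal cpio sketch along the same lines (the archive name is
an example):

    # cpio -o reads the file list from stdin and writes the
    # archive to stdout; restore with: cpio -idm < backup.cpio
    find /cvsroot -type f -print | cpio -o > backup.cpio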

Both of these methods remove the need for a script that recreates
the CVS directory structure, because the tar/cpio archive already
records the full paths.

Regards,

-Allan

-----Original Message-----
From: Conrad T. Pino [mailto:address@hidden]
Sent: Friday, February 13, 2004 10:11 AM
To: Info CVS
Subject: CVS Repository On-Line Backup Capability i.e. Hot Backup


Hi All,
=================================================
After searching the info-cvs archive, I see that
from time to time someone asks for an on-line
backup utility for CVS repositories.
=================================================
I'm considering developing a program to do
on-line ("hot") backups of CVS repositories.  The
intended audience is different from CVSup's,
since I want to save my backup on a Windows 2000
server for backup to tape, and the backup product
is a gzipped tar ball.

I'm looking for feedback from the CVS community
to see if the effort is worthwhile.
================================================
I've done a ksh script to back up the 4 CVS
repositories I'm using.  It's a 2+ phase backup
that uses CVS read locks to back up a repository
while CVS server processes continue to run.

The script locks only one directory at a time,
which maximizes CVS repository access.
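
For illustration, a read lock can be asserted the way the CVS
manual describes the locking protocol (a simplified sketch, not
the actual script; the function names are invented and error
handling is omitted):

    # Take the master lock (#cvs.lock) atomically via mkdir,
    # drop a read-lock marker (#cvs.rfl.<pid>), then release
    # the master lock so other readers can proceed.
    lock_dir() {
        if mkdir "$1/#cvs.lock" 2>/dev/null; then
            touch "$1/#cvs.rfl.$$"
            rmdir "$1/#cvs.lock"
            return 0
        fi
        return 1    # directory is busy; caller defers it
    }

    unlock_dir() {
        rm -f "$1/#cvs.rfl.$$"
    }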

Phase 1 is a non-stop run through the repository,
backing up every directory that can be read
locked.  If a read lock fails during phase 1, the
directory in use is placed on a deferred list and
phase 1 continues without waiting.

Phase 2 processes all directories on the deferred
list, again asserting read locks without waiting.
At the end of phase 2 the deferred list is
checked; if it still contains entries, a short
wait is performed and phase 2 is repeated until
all directories are backed up.
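
In outline, the two phases work like this (again a simplified
sketch, not the actual script; backup_one_dir is an invented name
for the archiving step, lock_dir/unlock_dir are from the sketch
above, and spaces in path names are not handled):

    deferred=""
    for dir in $(find "$CVSROOT" -type d); do
        if lock_dir "$dir"; then        # phase 1: one pass,
            backup_one_dir "$dir"       # no waiting
            unlock_dir "$dir"
        else
            deferred="$deferred $dir"   # busy; retry in phase 2
        fi
    done

    while [ -n "$deferred" ]; do        # phase 2: drain the list
        retry=""
        for dir in $deferred; do
            if lock_dir "$dir"; then
                backup_one_dir "$dir"
                unlock_dir "$dir"
            else
                retry="$retry $dir"
            fi
        done
        deferred=$retry
        [ -n "$deferred" ] && sleep 5   # short wait, then repeat
    done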

The backup products are one gzipped tar ball and
one shell script for each CVS repository.

The tar ball has entries only for files, because
adding a directory to tar triggers sub-directory
recursion, which is not safe when only one
directory is locked at a time.

The output shell script contains commands to
recreate the CVS repository directory structure,
since this information is not in the tar ball.
The output script also has a command to extract
the tar ball.
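
The generated script looks roughly like this (the paths and
archive name here are invented for illustration):

    #!/bin/ksh
    # Recreate the directory hierarchy (one mkdir per
    # repository directory), then unpack the file entries.
    mkdir -p /cvsroot/CVSROOT
    mkdir -p /cvsroot/myproject
    mkdir -p /cvsroot/myproject/src
    gunzip -c cvsroot-backup.tar.gz | tar -xf -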

CVS lock files are excluded from the tar ball so
the CVS repository is immediately ready to use
after it's been restored.
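
The per-directory file list can be built with a plain glob, which
skips sub-directories and makes the lock files easy to filter (a
sketch; $filelist is an invented name for the list handed to tar,
and $dir is the directory currently locked):

    # Append the plain files of one locked directory to the
    # tar list, skipping CVS lock files (#cvs.lock,
    # #cvs.rfl.*, #cvs.wfl.*).
    for f in "$dir"/*; do
        [ -f "$f" ] || continue         # files only; no recursion
        case $f in
            */"#cvs."*) continue ;;     # leave lock files out
        esac
        echo "$f" >> "$filelist"
    done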
=================================================
Any feedback and feature requests would be much
appreciated.  Thanks in advance.
=================================================
Conrad Pino


