
Re: jar files in cvs repository

From: Paul Sander
Subject: Re: jar files in cvs repository
Date: Mon, 21 Feb 2005 01:41:53 -0800

I'm not really answering your question, as I have not done any kind of timing analysis on the various storage methods. However, I have seen other systems that use the following methods. Based on their algorithms, one would expect them to be listed in order of decreasing speed for retrieving arbitrary versions. One would also expect them to be listed generally in order of decreasing space usage, though the last three are roughly equivalent.

- Store each version intact,
- Store each version intact, compressed,
- Store versions using xdelta,
- Store versions using interleaved deltas (like SCCS),
- One RCS file per file/branch pair (each branch gets its own RCS file),
- Store versions using purely reverse-deltas (like RCS).
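The space ordering above can be sketched with a toy comparison. This is a hypothetical example, not any real system's storage format: a 10-version history where each revision changes one line, with zlib standing in for generic compression and unified diffs standing in for reverse deltas.

```python
import difflib
import hashlib
import zlib

def make_version(edit):
    """A 500-line file; each version changes only its first line."""
    lines = [hashlib.sha256(str(j).encode()).hexdigest() + "\n" for j in range(500)]
    lines[0] = "edit %d\n" % edit
    return "".join(lines)

versions = [make_version(i) for i in range(10)]

# Method 1: store each version intact.
full = sum(len(v) for v in versions)

# Method 2: store each version intact, compressed.
compressed = sum(len(zlib.compress(v.encode())) for v in versions)

# Reverse deltas (RCS-style): newest version intact, each older
# version stored as a diff against its successor.
stored = [versions[-1]]
for older, newer in zip(versions, versions[1:]):
    diff = difflib.unified_diff(newer.splitlines(True), older.splitlines(True))
    stored.append("".join(diff))
reverse_delta = sum(len(s) for s in stored)

print(reverse_delta < compressed < full)  # True for this data
```

For histories of mostly-similar large versions, the deltas win by a wide margin; for small files or dissimilar versions the ordering can flip, which is part of why the last three methods in the list are only roughly equivalent.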

Note that the performance characteristics of the various delta algorithms may make certain versions faster to retrieve than others. For example, RCS stores the most recent HEAD version intact, so it is the fastest to retrieve, and retrieval slows in proportion to the number of versions between the desired one and the HEAD. Xdelta stores several versions intact and uses reverse deltas between them (with a more efficient differencing algorithm than RCS uses), so there are more fast versions and a cap on the number of deltas to apply. SCCS, on the other hand, uses a method more akin to #ifdef to build each version conditionally in a single pass, so it takes roughly the same amount of time to fetch any given version: slower than decompression or a direct copy, but faster than applying many diffs.
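The SCCS-style interleaved delta can be sketched as a toy "weave". This is an assumed simplification of the real SCCS file format: each line is stored exactly once, tagged with the version that inserted it and, optionally, the version that deleted it, so any version can be extracted in one linear pass.

```python
# Toy interleaved-delta store: (text, inserted_in, deleted_in).
# A deleted_in of None means the line is still live at HEAD.
weave = [
    ("hello\n",   1, None),  # inserted in v1, never deleted
    ("world\n",   1, 3),     # inserted in v1, deleted in v3
    ("goodbye\n", 2, None),  # inserted in v2
]

def extract(weave, version):
    """One pass over the weave reconstructs any version in O(file size)."""
    return "".join(text for text, born, died in weave
                   if born <= version and (died is None or version < died))

print(extract(weave, 1))  # hello / world
print(extract(weave, 3))  # hello / goodbye
```

The cost of `extract` is the same for every version, which matches the observation above that SCCS retrieval time is roughly flat, while RCS retrieval time grows with distance from the HEAD.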

I've also seen a system compress versions before checking them in to RCS, which was a bad idea. And I have personally modified old versions of RCS and CVS to compress the RCS files themselves, trading slightly degraded performance for a space advantage.

On Feb 20, 2005, at 11:55 PM, address@hidden wrote:

I'm not sure of the speeds, but it's a double-edged sword. Keeping full copies of everything bloats the disk space requirements of the CVS repository. And I couldn't say whether skipping the "diff" step and checking in a full copy every time would be faster or slower than storing deltas. But for source code and the like, I'd quickly say that checking in a full copy is a waste, since the files usually aren't that big and the time differences would be minimal...

Maybe someone else has actually done timing on this?

On Monday 21 February 2005 12:39 am, Jesper Vad Kristensen wrote:
David Bartmess wrote:
Used in the cvswrappers file, the -m gives the mode of the file to the cvs admin command, setting the mode of the file to either COPY (do not delta the file; put a full version in every time) or to MERGE (put only deltas of file changes into the repository)...

On Friday 18 February 2005 01:28 am, Jesper Vad Kristensen wrote:
Larry Jones wrote:
It's better to do:

        *.[Gg][Ii][Ff] -k 'b' -m 'COPY'

(Amazing what you can do in CVS!)

But why the -m 'COPY'?

That's very interesting. We're working with binary source code here and have some performance issues when retrieving stuff from branches (due to the backtracking or whatever it's called).

Would you - or anyone else here - happen to know if storing the whole copy of the file each time speeds up retrieval in branches?


Jesper Vad Kristensen

Info-cvs mailing list

David A. Bartmess
Software Configuration Manager / Sr. Software Developer
eDingo Enterprises
jSyncManager Development Team

 But one should not forget that money can buy a bed but not sleep,
 finery but not beauty, a house but not a home,
 medicine but not health, luxuries but not culture,
 sex but not love, and amusements but not happiness.


Paul Sander       | "To do two things at once is to do neither"
address@hidden | Publilius Syrus, Roman philosopher, 100 B.C.

