Re: cvs pserver performance


From: Russ Tremain
Subject: Re: cvs pserver performance
Date: Thu, 25 Jan 2001 21:03:31 -0800

At 8:58 AM -0800 1/25/01, Larry Jones wrote:
>Russ Tremain writes:
>> 
>> At 4:48 PM -0800 1/24/01, Brian Behlendorf wrote:
>> >
>> >I recommend mounting /tmp on some sort of memory-based filesystem for
>> >maximum performance - this made a big difference on apache.org.
>> 
>> thanks.. this is the next step; unfortunately, have to upgrade
>> memory first.
>> 
>> Also, hard to know how big to make this device; I'm planning on
>> 1 GB, and then writing a cron job to clean up left-over server
>> tmp dirs.
>
>There shouldn't be any left-over server directories unless the server
>crashes or encounters a serious error.  Is this just general caution, or
>have you had an actual problem with left-over directories?

I have, but I think these occur when the client doesn't close down the
connection to the server properly.  Sometimes I have pserver processes
that hang around for a couple of days, and I (or someone else)
kill these off by hand with kill -9.  That leaves the server
directory behind.  A normal kill seems to clean up after itself.
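For what it's worth, sweeping these up from cron is simple
enough.  A sketch, assuming the server temp area is /cvstmp
(the leftovers are the cvs-serv<pid> directories) and that
anything untouched for a day is safely stale:

        # crontab entry: nightly at 4am, remove stale server temp dirs
        0 4 * * * cd /cvstmp && find . -name 'cvs-serv*' -prune -mtime +1 -exec rm -rf {} \;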

Not sure why the connections don't close down on the server... when
I truss them they are waiting on a read.  This is with 1.10.8, so
perhaps these problems will go away when I upgrade to 1.11.

BTW, in terms of the Solaris memory-based filesystem (I do work
for Sun, after all :-), the solution is pretty simple:

        # mount -F tmpfs -o size=512m vmem /cvstmp

will create and mount a 512 MB "ram" disk (actually a virtual-memory
backed ram disk, the way I read the man page, meaning it is
backed by swap).  The identifier "vmem" can be whatever you want df
to report as the name of the device.
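To make the mount survive a reboot, the equivalent /etc/vfstab
entry would be something like (size to taste):

        swap    -       /cvstmp tmpfs   -       yes     size=512m

And if you use a dedicated directory like /cvstmp rather than
/tmp itself, remember to point the server at it, e.g. with the
-T global option in the inetd.conf entry (all on one line;
paths here are illustrative):

        cvspserver stream tcp nowait root /usr/local/bin/cvs cvs -T /cvstmp --allow-root=/home/cvsroot pserver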

This really helps pserver performance on updates.  Here are
the results of some tests run on a Sun Ultra 60 with a single
disk drive (times are elapsed minutes:seconds per process; all
tests were run on the same machine as the pserver):

+-----------------------+---------------+---------------+
|        what           | UFS /tmp      | tmpfs /tmp    |
+-----------------------+---------------+---------------+
| 1 update:  2321 dirs, |               |               |
|        12,333 files   |   14:11       |    3:42       |
+-----------------------+---------------+---------------+
| 2 updates, per update |   37:36       |    8:07       |
+-----------------------+---------------+---------------+
| 3 updates, per update |   57:08       |    14:50      |
+-----------------------+---------------+---------------+
| cp -r of working dir  |             11:33             |
+-----------------------+-------------------------------+
| Size of working dir   |             117MB             |
+-----------------------+-------------------------------+

Update times are averaged for the multiple update tests.

Note that these updates are against a static repository,
so they are not moving any data to the client.  This means
that almost all of the i/o is reading from the repository
and writing to the tmp directory, which is similar to a
real pserver environment, minus any network i/o.

As an additional test, I added a small tmpfs for the cvs
locks directory and re-ran the 3-update test:

+-----------------------+-------------------------------+
|        what           |  tmpfs /tmp + tmpfs /locks    |
+-----------------------+-------------------------------+
| 3 updates, per update |               3:57            |
+-----------------------+-------------------------------+
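For reference, the lock-directory move itself is just another
small mount plus a CVSROOT/config setting (the LockDir option,
available in recent CVS versions).  A sketch, with illustrative
paths and size:

        # mount a small tmpfs for the lock area
        mount -F tmpfs -o size=64m vmem /cvslocks

and then in $CVSROOT/CVSROOT/config:

        LockDir=/cvslocks

CVS will then create its lock files under /cvslocks instead of
in the repository itself.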

The improvement over just /tmp on tmpfs - 14:50 down to 3:57,
nearly a factor of four - is perhaps a bit artificial, given
that all three processes were started at the same time.  One
would expect that to lead to more lock contention than normal.
More testing would be needed to suss this out.

Overall, however, the results of using tmpfs for both the cvs
/tmp and lock directories are fairly astonishing - a 1400%
improvement over the simple UFS approach.
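(In seconds: 3428 for 57:08 against 237 for 3:57, a factor of
about 14.5.)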

I would recommend at least 500 MB for the cvs tmp directory -
more if you have the memory and are supporting lots of
concurrent users.  Reading the man page, it seems you
have to be careful about some of the limits.  In particular,
the number of files and directories you can allocate is
dependent on how big your physical memory is, even though
you can increase the size of tmpfs by adding swap
space.  This limit matters for CVS because it creates a lot
of directories and files.  I ran into problems on a machine
with 512MB, but seemed okay on a 1GB machine.

Not sure how this limit is calculated...
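One thing you can do is watch the free file count on the mount;
if I read the Solaris df man page right, -e reports exactly that:

        # print the number of files free on the tmpfs mount
        df -e /cvstmp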

-Russ

>A good alternative to the memory based filesystem is a disk based
>filesystem that supports soft updates (even better than a virtual memory
>based filesystem, not quite as good as a physical memory based
>filesystem but a much more efficient use of resources).  If neither of
>those are viable alternatives, you can at least mount /tmp asynchronous
>if your system supports it, although that pretty-much means that you
>have to recreate it on reboot rather than just fsck-ing it.
>
>-Larry Jones
>
>There's a connection here, I just know it. -- Calvin





